Updates from: 01/11/2021 04:07:00
Service Microsoft Docs article Related commit history on GitHub Change details
active-directory-b2c https://docs.microsoft.com/en-us/azure/active-directory-b2c/add-identity-provider https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory-b2c/add-identity-provider.md
@@ -6,7 +6,7 @@ author: msmimart
manager: celestedg
ms.author: mimart
-ms.date: 01/04/2021
+ms.date: 01/08/2021
ms.custom: mvc
ms.topic: how-to
ms.service: active-directory
@@ -41,7 +41,8 @@ You typically use only one identity provider in your applications, but you have
* [LinkedIn](identity-provider-linkedin.md)
* [Microsoft Account](identity-provider-microsoft-account.md)
* [QQ](identity-provider-qq.md)
-* [Salesforce](identity-provider-salesforce-saml.md)
+* [Salesforce](identity-provider-salesforce.md)
+* [Salesforce (SAML protocol)](identity-provider-salesforce-saml.md)
* [Twitter](identity-provider-twitter.md)
* [WeChat](identity-provider-wechat.md)
* [Weibo](identity-provider-weibo.md)
active-directory-b2c https://docs.microsoft.com/en-us/azure/active-directory-b2c/microsoft-graph-operations https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory-b2c/microsoft-graph-operations.md
@@ -12,6 +12,7 @@ ms.topic: reference
ms.date: 10/15/2020
ms.author: mimart
ms.subservice: B2C
+ms.custom: fasttrack-edit
---

# Microsoft Graph operations available for Azure AD B2C
@@ -52,10 +53,10 @@ Manage the identity providers available to your user flows in your Azure AD B2C
Configure pre-built policies for sign-up, sign-in, combined sign-up and sign-in, password reset, and profile update.
-- [List user flows](/graph/api/identityuserflow-list)
-- [Create a user flow](/graph/api/identityuserflow-post-userflows)
-- [Get a user flow](/graph/api/identityuserflow-get)
-- [Delete a user flow](/graph/api/identityuserflow-delete)
+- [List user flows](/graph/api/identitycontainer-list-b2cuserflows)
+- [Create a user flow](/graph/api/identitycontainer-post-b2cuserflows)
+- [Get a user flow](/graph/api/b2cidentityuserflow-get)
+- [Delete a user flow](/graph/api/b2cidentityuserflow-delete)
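The four renamed links above point at the `identityContainer`/`b2cIdentityUserFlow` resources. As a minimal illustration of the list operation — assuming an access token with the `IdentityUserFlow.Read.All` permission acquired elsewhere, and noting that the B2C user flow API was exposed under the Graph `beta` endpoint at the time of this change:

```python
# Minimal sketch: list Azure AD B2C user flows via Microsoft Graph.
# GRAPH_ACCESS_TOKEN is a placeholder; acquire it with your preferred OAuth flow.
import os
import requests

resp = requests.get(
    "https://graph.microsoft.com/beta/identity/b2cUserFlows",
    headers={"Authorization": f"Bearer {os.environ['GRAPH_ACCESS_TOKEN']}"},
)
resp.raise_for_status()

for flow in resp.json().get("value", []):
    # Each b2cIdentityUserFlow carries an id (for example "B2C_1_SignUpSignIn")
    # and a userFlowType such as "signUpOrSignIn".
    print(flow["id"], flow.get("userFlowType"))
```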
## Custom policies
active-directory https://docs.microsoft.com/en-us/azure/active-directory/develop/active-directory-configurable-token-lifetimes https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/develop/active-directory-configurable-token-lifetimes.md
@@ -117,9 +117,9 @@ A token lifetime policy is a type of policy object that contains token lifetime
| --- | --- | --- | --- | --- | --- |
| Refresh Token Max Inactive Time |MaxInactiveTime |Refresh tokens |90 days |10 minutes |90 days |
| Single-Factor Refresh Token Max Age |MaxAgeSingleFactor |Refresh tokens (for any users) |Until-revoked |10 minutes |Until-revoked<sup>1</sup> |
-| Multi-Factor Refresh Token Max Age |MaxAgeMultiFactor |Refresh tokens (for any users) | 180 days |10 minutes |180 days<sup>1</sup> |
+| Multi-Factor Refresh Token Max Age |MaxAgeMultiFactor |Refresh tokens (for any users) | Until-revoked |10 minutes |180 days<sup>1</sup> |
| Single-Factor Session Token Max Age |MaxAgeSessionSingleFactor |Session tokens (persistent and nonpersistent) |Until-revoked |10 minutes |Until-revoked<sup>1</sup> |
-| Multi-Factor Session Token Max Age |MaxAgeSessionMultiFactor |Session tokens (persistent and nonpersistent) | 180 days |10 minutes | 180 days<sup>1</sup> |
+| Multi-Factor Session Token Max Age |MaxAgeSessionMultiFactor |Session tokens (persistent and nonpersistent) | Until-revoked |10 minutes | 180 days<sup>1</sup> |
* <sup>1</sup>365 days is the maximum explicit length that can be set for these attributes.
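The edits above move the Multi-Factor Max Age defaults to Until-revoked; access token lifetime remains configurable through a `tokenLifetimePolicy` object. A hedged sketch of creating one through Microsoft Graph, assuming a token with `Policy.ReadWrite.ApplicationConfiguration`; the display name and the four-hour lifetime are illustrative only:

```python
# Sketch: create a token lifetime policy with a 4-hour access token lifetime.
# GRAPH_ACCESS_TOKEN is a placeholder acquired elsewhere.
import json
import os
import requests

policy = {
    # The Graph schema expects the definition as a JSON string inside a list.
    "definition": [json.dumps(
        {"TokenLifetimePolicy": {"Version": 1, "AccessTokenLifetime": "04:00:00"}}
    )],
    "displayName": "ExampleAccessTokenLifetimePolicy",  # illustrative name
    "isOrganizationDefault": False,
}

resp = requests.post(
    "https://graph.microsoft.com/v1.0/policies/tokenLifetimePolicies",
    headers={"Authorization": f"Bearer {os.environ['GRAPH_ACCESS_TOKEN']}"},
    json=policy,
)
resp.raise_for_status()
print(resp.json()["id"])
```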
active-directory https://docs.microsoft.com/en-us/azure/active-directory/fundamentals/protect-m365-from-on-premises-attacks https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/fundamentals/protect-m365-from-on-premises-attacks.md
new file mode 100644
@@ -0,0 +1,367 @@
+---
+title: Protecting Microsoft 365 from on-premises attacks
+description: Guidance on how to ensure an on-premises attack does not impact Microsoft 365
+services: active-directory
+author: BarbaraSelden
+manager: daveba
+ms.service: active-directory
+ms.workload: identity
+ms.subservice: fundamentals
+ms.topic: conceptual
+ms.date: 12/22/2020
+ms.author: baselden
+ms.reviewer: ajburnle
+ms.custom: "it-pro, seodec18"
+ms.collection: M365-identity-device-management
+---
+
+# Protecting Microsoft 365 from on-premises attacks
+
+Many customers connect their private corporate networks to Microsoft 365
+to benefit their users, devices, and applications. However, there are
+many well-documented ways these private networks can be compromised. Because Microsoft 365 acts as the "nervous system" for many organizations, it is critical to protect it from compromised on-premises infrastructure.
+
+This article shows you how to configure your systems to protect
+your Microsoft 365 cloud environment from on-premises compromise. We
+primarily focus on Azure AD tenant configuration settings, the ways
+Azure AD tenants can be safely connected to on-premises systems, and the
+tradeoffs required to operate your systems in ways that protect your
+cloud systems from on-premises compromise.
+
+We strongly recommend you implement this guidance to secure your
+Microsoft 365 cloud environment.
+> [!NOTE]
+> This article was initially published as a blog post. It has been moved here for longevity and maintenance.
+> To create an offline version of this article, use your browser's print to PDF functionality. Check back here frequently for updates.
+
+## Primary threat vectors from compromised on-premises environments
+
+Your Microsoft 365 cloud environment benefits from an extensive
+monitoring and security infrastructure. Using machine learning and human
+intelligence that looks across worldwide traffic, it can rapidly detect
+attacks and allow you to reconfigure in near real time. In hybrid
+deployments that connect on-premises infrastructure to Microsoft 365,
+many organizations delegate trust to on-premises components for critical
+authentication and directory object state management decisions.
+Unfortunately, if the on-premises environment is compromised, these
+trust relationships give attackers opportunities to compromise
+your Microsoft 365 environment.
+
+The two primary threat vectors are **federation trust relationships**
+and **account synchronization.** Both vectors can grant an attacker
+administrative access to your cloud.
+
+* **Federated trust relationships**, such as SAML authentication, are
+ used to authenticate to Microsoft 365 via your on-premises Identity
+ Infrastructure. If a SAML token signing certificate is compromised,
+ federation would allow anyone with that certificate to impersonate
+ any user in your cloud. **We recommend you disable federation trust
+ relationships for authentication to Microsoft 365 when possible.**
+
+* **Account synchronization** can be used to modify privileged users
+ (including their credentials) or groups granted administrative
+ privileges in Microsoft 365. **We recommend you ensure that
+ synchronized objects hold no privileges beyond a user in
+  Microsoft 365,** either directly or through direct or nested
+  membership in trusted cloud roles or groups (a verification
+  sketch follows this list).
+
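As referenced in the account synchronization bullet above, a minimal verification sketch, assuming a token with `RoleManagement.Read.Directory` and `User.Read.All`; it flags only direct role assignments, so nested group membership still needs a separate transitive check:

```python
# Sketch: flag directory role members that are synchronized from on-premises AD.
# GRAPH_ACCESS_TOKEN is a placeholder; pagination (@odata.nextLink) is omitted.
import os
import requests

GRAPH = "https://graph.microsoft.com/v1.0"
HEADERS = {"Authorization": f"Bearer {os.environ['GRAPH_ACCESS_TOKEN']}"}

roles = requests.get(f"{GRAPH}/directoryRoles", headers=HEADERS)
roles.raise_for_status()

for role in roles.json()["value"]:
    members = requests.get(f"{GRAPH}/directoryRoles/{role['id']}/members", headers=HEADERS)
    members.raise_for_status()
    for member in members.json()["value"]:
        # onPremisesSyncEnabled is true for accounts mastered on-premises.
        if member.get("onPremisesSyncEnabled"):
            print(f"{role['displayName']}: {member.get('userPrincipalName')} is synced from on-premises")
```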
+## Protecting Microsoft 365 from on-premises compromise
+
+To address the threat vectors outlined above, we recommend you adhere to
+the principles illustrated below:
+
+![Reference architecture for protecting Microsoft 365 ](media/protect-m365/protect-m365-principles.png)
+
+* **Fully Isolate your Microsoft 365 administrator accounts.** They
+ should be
+
+ * Mastered in Azure AD.
+
+ * Authenticated with Multi-factor authentication (MFA).
+
+ * Secured by Azure AD conditional access.
+
+ * Accessed only by using Azure Managed Workstations.
+
+These are restricted-use accounts (a creation sketch follows this list of principles). **There should be no on-premises accounts with administrative privileges in Microsoft 365.** For more information, see this [overview of Microsoft 365 administrator roles](https://docs.microsoft.com/microsoft-365/admin/add-users/about-admin-roles?view=o365-worldwide).
+Also see [Roles for Microsoft 365 in Azure Active Directory](../roles/m365-workload-docs.md).
+
+* **Manage devices from Microsoft 365.** Use Azure AD Join and
+ cloud-based mobile device management (MDM) to eliminate dependencies
+ on your on-premises device management infrastructure, which can
+ compromise device and security controls.
+
+* **No on-premises account has elevated privileges to Microsoft 365.**
+ Accounts accessing on-premises applications that require NTLM, LDAP,
+ or Kerberos authentication need an account in the organization's
+ on-premises identity infrastructure. Ensure that these accounts,
+ including service accounts, are not included in privileged cloud
+ roles or groups and that changes to these accounts cannot impact the
+ integrity of your cloud environment. Privileged on-premises software
+ must not be capable of impacting Microsoft 365 privileged accounts
+ or roles.
+
+* **Use Azure AD cloud authentication** to eliminate dependencies on
+ your on-premises credentials. Always use strong authentication,
+ such as Windows Hello, FIDO, the Microsoft Authenticator, or Azure
+ AD MFA.
+
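As noted under the first principle, a minimal sketch of creating a cloud-only account (mastered in Azure AD, with no on-premises anchor), assuming a token with `User.ReadWrite.All`; every name below is a placeholder, and MFA plus Conditional Access should then be layered on per the bullets above:

```python
# Sketch: create a cloud-only account in Azure AD for isolated administration.
# GRAPH_ACCESS_TOKEN and the UPN/display name are placeholders.
import os
import requests

account = {
    "accountEnabled": True,
    "displayName": "Cloud Admin (example)",
    "mailNickname": "cloudadmin",
    "userPrincipalName": "cloudadmin@contoso.onmicrosoft.com",  # placeholder UPN
    "passwordProfile": {
        "forceChangePasswordNextSignIn": True,
        "password": os.environ["INITIAL_PASSWORD"],  # never hard-code secrets
    },
}

resp = requests.post(
    "https://graph.microsoft.com/v1.0/users",
    headers={"Authorization": f"Bearer {os.environ['GRAPH_ACCESS_TOKEN']}"},
    json=account,
)
resp.raise_for_status()
print(resp.json()["id"])
```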
+## Specific Recommendations
+
+The following sections provide specific guidance on how to implement the
+principles described above.
+
+### Isolate privileged identities
+
+In Azure AD, users with privileged roles such as administrators are the root of trust to build and manage the rest of the environment. Implement the following practices to minimize the impact of a compromise.
+
+* Use cloud-only accounts for Azure AD and Microsoft 365 privileged
+  roles.
+
+* Deploy [privileged access devices](https://docs.microsoft.com/security/compass/privileged-access-devices#device-roles-and-profiles) for privileged access to manage Microsoft 365 and Azure AD.
+
+* Deploy [Azure AD Privileged Identity Management](../privileged-identity-management/pim-configure.md) (PIM) for just in time (JIT) access to all human accounts that have privileged roles, and require strong authentication to activate roles.
+
+* Provide administrative roles the [least privilege possible to perform their tasks](../roles/delegate-by-task.md).
+
+* To enable a richer role assignment experience that includes delegation and multiple roles at the same time, consider using Azure AD security groups or Microsoft 365 Groups (collectively "cloud groups") and [enable role-based access control](../roles/groups-assign-role.md) (a group-creation sketch follows this list). You can also use [Administrative Units](../roles/administrative-units.md) to restrict the scope of roles to a portion of the organization.
+
+* Deploy [Emergency Access Accounts](../roles/security-emergency-access.md) and do NOT use on-premises password vaults to store credentials.
+
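For the cloud-groups item above, a sketch of creating a role-assignable security group through Microsoft Graph — assuming the caller holds a role allowed to create such groups (for example, Privileged Role Administrator) and a token with `Group.ReadWrite.All`:

```python
# Sketch: create a cloud-only, role-assignable security group for Azure AD RBAC.
# GRAPH_ACCESS_TOKEN and the group names are placeholders.
import os
import requests

group = {
    "displayName": "Helpdesk Admins (example)",
    "mailEnabled": False,
    "mailNickname": "helpdeskadmins",
    "securityEnabled": True,
    # isAssignableToRole must be set at creation; it cannot be changed later.
    "isAssignableToRole": True,
}

resp = requests.post(
    "https://graph.microsoft.com/v1.0/groups",
    headers={"Authorization": f"Bearer {os.environ['GRAPH_ACCESS_TOKEN']}"},
    json=group,
)
resp.raise_for_status()
print(resp.json()["id"])
```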
+For more information, see [Securing privileged access](https://aka.ms/SPA), which has detailed guidance on this topic. Also, see [Secure access practices for administrators in Azure AD](../roles/security-planning.md).
+
+### Use cloud authentication
+
+Credentials are a primary attack vector. Implement the following
+practices to make credentials more secure.
+
+* [Deploy passwordless authentication](../authentication/howto-authentication-passwordless-deployment.md): Reduce the use of passwords as much as possible by deploying passwordless credentials. These credentials are managed and
+ validated natively in the cloud. Choose from:
+
+ * [Windows Hello for business](https://docs.microsoft.com/windows/security/identity-protection/hello-for-business/passwordless-strategy)
+
+ * [Authenticator App](../authentication/howto-authentication-passwordless-phone.md)
+
+ * [FIDO2 security keys](../authentication/howto-authentication-passwordless-security-key-windows.md)
+
+* [Deploy Multi-Factor Authentication](https://aka.ms/deploymentplans/mfa): Provision
+ [multiple strong credentials using Azure AD MFA](../fundamentals/resilience-in-credentials.md). That way, access to cloud resources will require a credential that is managed in Azure AD in addition to an on-premises password that can be manipulated.
+
+ * For more information, see [Create a resilient access control management strategy with Azure active Directory](https://aka.ms/resilientaad).
+
+**Limitations and tradeoffs**
+
+* Hybrid account password management requires hybrid components such as password protection agents and password writeback agents. If your on-premises infrastructure is compromised, attackers can control the machines on which these agents reside. While this will not
+  compromise your cloud infrastructure, your cloud accounts cannot protect these components from on-premises compromise.
+
+* On-premises accounts synced from Active Directory are marked to never expire in Azure AD, based on the assumption that on-premises AD password policies will mitigate this. If your on-premises AD is compromised and synchronization from Azure AD Connect needs to be disabled, you must set the option [EnforceCloudPasswordPolicyForPasswordSyncedUsers](../hybrid/how-to-connect-password-hash-synchronization.md).
+
+## Provision User Access from the Cloud
+
+Provisioning refers to the creation of user accounts and groups in applications or identity providers.
+
+![Diagram of provisioning architecture](media/protect-m365/protect-m365-provision.png)
+
+* **Provision from cloud HR apps to Azure AD:** This enables an on-premises compromise to be isolated without disrupting your Joiner-Mover-Leaver cycle from your cloud HR apps to Azure AD.
+
+* **Cloud Applications:** Where possible, deploy [Azure AD App
+ Provisioning](../app-provisioning/user-provisioning.md) as
+ opposed to on-premises provisioning solutions. This will protect
+ some of your SaaS apps from being poisoned with malicious user
+ profiles due to on-premises breaches.
+
+* **External Identities:** Use [Azure AD B2B
+ collaboration](../external-identities/what-is-b2b.md).
+ This will reduce the dependency on on-premises accounts for external
+ collaboration with partners, customers, and suppliers. Carefully
+ evaluate any direct federation with other identity providers. We
+ recommend limiting B2B guest accounts in the following ways.
+
+ * Limit guest access to browsing groups and other properties in
+ the directory. Use the external collaboration settings to restrict guest
+ ability to read groups they are not members of.
+
+ * Block access to the Azure portal. You can make rare necessary
+ exceptions. Create a Conditional Access policy that includes all guests
+ and external users and then [implement a policy to block
+ access](https://docs.microsoft.com/azure/role-based-access-control/conditional-access-azure-management.md).
+
+* **Disconnected Forests:** Use [Azure AD Cloud
+ Provisioning](../cloud-provisioning/what-is-cloud-provisioning.md). This enables you to connect to disconnected forests, eliminating the need to establish cross-forest connectivity or trusts, which can
+  broaden the impact of an on-premises breach.
+
+**Limitations and Tradeoffs:**
+
+* When used to provision hybrid accounts, provisioning from cloud HR systems to Azure AD relies on on-premises synchronization to complete the data flow from AD to Azure AD. If synchronization is interrupted, new employee records will not be available in Azure AD.
+
+## Use cloud groups for collaboration and access
+
+Cloud groups allow you to decouple your collaboration and access from
+your on-premises infrastructure.
+
+* **Collaboration:** Use Microsoft 365 Groups and Microsoft Teams for
+ modern collaboration. Decommission on-premises distribution lists,
+ and [Upgrade distribution lists to Microsoft 365 Groups in
+ Outlook](https://docs.microsoft.com/office365/admin/manage/upgrade-distribution-lists?view=o365-worldwide).
+
+* **Access:** Use Azure AD security groups or Microsoft 365 Groups to
+ authorize access to applications in Azure AD.
+* **Office 365 licensing:** Use group-based licensing to provision to
+ Office 365 using cloud-only groups. This decouples control of group
+ membership from on-premises infrastructure.
+
+Owners of groups used for access should be considered privileged
+identities to avoid membership takeover from on-premises compromise.
+Takeover includes direct manipulation of group membership on-premises,
+or manipulation of on-premises attributes that can affect dynamic group
+membership in Microsoft 365.
+
+## Manage devices from the cloud
+
+Use Azure AD capabilities to securely manage devices.
+
+- **Use Windows 10 Workstations:** [Deploy Azure AD
+ Joined](../devices/azureadjoin-plan.md)
+ devices with MDM policies. Enable [Windows
+ Autopilot](https://docs.microsoft.com/mem/autopilot/windows-autopilot)
+ for a fully automated provisioning experience.
+
+ - Deprecate Windows 8.1 and earlier machines.
+
+ - Do not deploy Server OS machines as workstations.
+
+ - Use [Microsoft Intune](https://www.microsoft.com/en/microsoft-365/enterprise-mobility-security/microsoft-intune)
+ as the source of authority of all device management workloads.
+
+- [**Deploy privileged access devices**](https://docs.microsoft.com/security/compass/privileged-access-devices#device-roles-and-profiles)
+ for privileged access to manage Microsoft 365 and Azure AD.
+
+## Workloads, applications, and resources
+
+- **On-premises SSO systems:** Deprecate any on-premises federation
+ and Web Access Management infrastructure and configure applications
+ to use Azure AD.
+
+- **SaaS and LOB applications that support modern authentication
+ protocols:** [Use Azure AD for single
+ sign-on](../manage-apps/what-is-single-sign-on.md). The
+ more apps you configure to use Azure AD for authentication, the less
+ risk in the case of an on-premises compromise.
+
+* **Legacy Applications**
+
+    * Authentication, authorization, and remote access to legacy applications that do not support modern authentication can be enabled via [Azure AD Application Proxy](../manage-apps/application-proxy.md). They can also be enabled through a network or application delivery controller solution using [secure hybrid access partner integrations](../manage-apps/secure-hybrid-access.md).
+
+    * Choose a VPN vendor that supports modern authentication and integrate its authentication with Azure AD. In the case of an on-premises compromise, you can use Azure AD to disable or block access by disabling the VPN.
+
+* **Application and workload servers**
+
+    * Applications or resources that require servers can be migrated to Azure IaaS and use [Azure AD Domain Services](https://docs.microsoft.com/azure/active-directory-domain-services/overview) (Azure AD DS) to decouple trust and dependency on AD on-premises. To achieve this decoupling, virtual networks used for Azure AD DS should have no connection to corporate networks.
+
+    * Follow the [credential tiering](https://aka.ms/TierModel) guidance. Application servers are typically considered Tier 1 assets.
+
+## Conditional Access Policies
+
+Use Azure AD Conditional Access to interpret signals and make
+authentication decisions based on them. For more information, see the
+[Conditional Access deployment plan](https://aka.ms/deploymentplans/ca).
+
+* [Legacy Authentication Protocols](../fundamentals/auth-sync-overview.md): Use Conditional Access to [block legacy authentication](../conditional-access/howto-conditional-access-policy-block-legacy.md) protocols whenever possible (a policy sketch follows this list). Additionally, disable legacy authentication protocols at the application level using application-specific configuration.
+
+ * See specific details for [Exchange Online](https://docs.microsoft.com/exchange/clients-and-mobile-in-exchange-online/disable-basic-authentication-in-exchange-online#how-basic-authentication-works-in-exchange-online) and [SharePoint Online](https://docs.microsoft.com/powershell/module/sharepoint-online/set-spotenant?view=sharepoint-ps).
+
+* Implement the recommended [Identity and device access configurations.](https://docs.microsoft.com/microsoft-365/security/office-365-security/identity-access-policies?view=o365-worldwide)
+
+* If you are using a version of Azure AD that does not include Conditional Access, ensure that you are using the [Azure AD security defaults](../fundamentals/concept-fundamentals-security-defaults.md).
+
+ * For more information on Azure AD feature licensing, see the [Azure AD pricing guide](https://azure.microsoft.com/pricing/details/active-directory/).
+
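As referenced in the legacy-authentication item above, a hedged sketch of a blocking policy created through the Microsoft Graph Conditional Access API — started in report-only mode, with a placeholder display name and a token assumed to carry `Policy.ReadWrite.ConditionalAccess`:

```python
# Sketch: Conditional Access policy that blocks legacy authentication clients.
# Created in report-only mode; review sign-in impact before enforcing.
import os
import requests

policy = {
    "displayName": "Block legacy authentication (example)",  # placeholder
    "state": "enabledForReportingButNotEnforced",
    "conditions": {
        "users": {"includeUsers": ["All"]},
        "applications": {"includeApplications": ["All"]},
        # exchangeActiveSync and "other" cover the legacy protocol clients.
        "clientAppTypes": ["exchangeActiveSync", "other"],
    },
    "grantControls": {"operator": "OR", "builtInControls": ["block"]},
}

resp = requests.post(
    "https://graph.microsoft.com/v1.0/identity/conditionalAccess/policies",
    headers={"Authorization": f"Bearer {os.environ['GRAPH_ACCESS_TOKEN']}"},
    json=policy,
)
resp.raise_for_status()
print(resp.json()["id"])
```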
+## Monitoring
+
+Once you have configured your environment to protect your Microsoft 365
+from an on-premises compromise, [proactively monitor](../reports-monitoring/overview-monitoring.md)
+the environment.
+### Scenarios to Monitor
+
+Monitor the following key scenarios, in addition to any scenarios
+specific to your organization. For example, you should proactively
+monitor access to your business-critical applications and resources.
+
+* **Suspicious activity**: All [Azure AD risk events](https://docs.microsoft.com/azure/active-directory/identity-protection/overview-identity-protection#risk-detection-and-remediation) should be monitored for suspicious activity. [Azure AD Identity Protection](https://docs.microsoft.com/azure/active-directory/identity-protection/overview-identity-protection) is natively integrated with Azure Security Center.
+
+ * Define the network [named locations](../reports-monitoring/quickstart-configure-named-locations.md) to avoid noisy detections on location-based signals.
+* **User Entity Behavioral Analytics (UEBA) alerts**: Use UEBA
+  to get insights on anomaly detection.
+    * Microsoft Cloud App Security (MCAS) provides [UEBA in the cloud](https://docs.microsoft.com/cloud-app-security/tutorial-ueba).
+
+ * You can [integrate on-premises UEBA from Azure ATP](https://docs.microsoft.com/defender-for-identity/install-step2). MCAS reads signals from Azure AD Identity Protection.
+
+* **Emergency access accounts activity**: Any access using [emergency access accounts](../roles/security-emergency-access.md) should be monitored and alerts created for investigations (a sign-in query sketch follows this list). This monitoring must include:
+
+ * Sign-ins.
+
+ * Credential management.
+
+ * Any updates on group memberships.
+
+ * Application Assignments.
+* **Privileged role activity**: Configure and review
+ security [alerts generated by Azure AD PIM](https://docs.microsoft.com/azure/active-directory/privileged-identity-management/pim-how-to-configure-security-alerts?tabs=new#security-alerts).
+ Monitor direct assignment of privileged roles outside PIM by
+ generating alerts whenever a user is assigned directly.
+* **Azure AD tenant-wide configurations**: Any change to tenant-wide configurations should generate alerts in the system. These include, but are not limited to:
+ * Updating custom domains
+
+ * Azure AD B2B allow/block list changes.
+ * Azure AD B2B allowed identity providers (SAML IDPs through direct federation or social logins).
+ * Conditional Access or Risk policy changes
+
+* **Application and service principal objects**:
+ * New applications or service principals that might require Conditional Access policies.
+
+ * Additional credentials added to service principals.
+ * Application consent activity.
+
+* **Custom roles**:
+ * Updates of the custom role definitions.
+
+ * New custom roles created.
+
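For the emergency access item above, a minimal sketch that pulls recent sign-ins for a break-glass account from the Azure AD sign-in log via Microsoft Graph, assuming a token with `AuditLog.Read.All`; the UPN is a placeholder:

```python
# Sketch: review recent sign-ins for an emergency access (break-glass) account.
# GRAPH_ACCESS_TOKEN and the UPN are placeholders.
import os
import requests

UPN = "breakglass@contoso.onmicrosoft.com"  # placeholder emergency account

resp = requests.get(
    "https://graph.microsoft.com/v1.0/auditLogs/signIns",
    headers={"Authorization": f"Bearer {os.environ['GRAPH_ACCESS_TOKEN']}"},
    params={"$filter": f"userPrincipalName eq '{UPN}'", "$top": "25"},
)
resp.raise_for_status()

for event in resp.json().get("value", []):
    # Any sign-in from a break-glass account warrants an investigation.
    print(event["createdDateTime"], event.get("ipAddress"), event.get("appDisplayName"))
```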
+### Log Management
+
+Define a log storage and retention strategy, design, and implementation to facilitate a consistent toolset. For example, you might use SIEM systems such as Azure Sentinel, common queries, and investigation and forensics playbooks.
+
+* **Azure AD Logs**: Ingest the logs and signals produced, following consistent best practices for diagnostics settings, log retention, and SIEM ingestion. The log strategy must include the following Azure AD logs:
+ * Sign-in activity
+
+ * Audit logs
+
+ * Risk events
+
+Azure AD provides [Azure Monitor integration](../reports-monitoring/concept-activity-logs-azure-monitor.md) for the sign-in activity log and audit logs. Risk events can be ingested through [Microsoft Graph API](https://aka.ms/AzureADSecuredAzure/32b). You can [stream Azure AD logs to Azure monitor logs](../reports-monitoring/howto-integrate-activity-logs-with-log-analytics.md).
+
+* **Hybrid Infrastructure OS Security Logs.** All hybrid identity infrastructure OS logs should be archived and carefully monitored as a Tier 0 system, given the surface area implications. This includes:
+
+ * Azure AD Connect. [Azure AD Connect Health](https://aka.ms/AzureADSecuredAzure/32e) must be deployed to monitor identity synchronization.
+
+ * Application Proxy Agents
+
+ * Password write-back agents
+
+ * Password Protection Gateway machines
+
+    * NPS servers that have the Azure MFA RADIUS extension
+
+## Next Steps
+* [Build resilience into identity and access management with Azure AD](resilience-overview.md)
+
+* [Secure external access to resources](secure-external-access-resources.md)
+* [Integrate all your apps with Azure AD](five-steps-to-full-application-integration-with-azure-ad.md)
active-directory https://docs.microsoft.com/en-us/azure/active-directory/fundamentals/secure-external-access-resources https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/fundamentals/secure-external-access-resources.md
@@ -1,5 +1,5 @@
---
-title: Securing external access to resources in Azure Active Directory
+title: Securing external collaboration in Azure Active Directory
description: A guide for architects and IT administrators on securing external access to internal resources
services: active-directory
author: BarbaraSelden
@@ -15,7 +15,7 @@ ms.custom: "it-pro, seodec18"
ms.collection: M365-identity-device-management
---
-# Securing external access to resources
+# Securing external collaboration in Azure Active Directory and Microsoft 365
Secure collaboration with external partners ensures that the right external partners have appropriate access to internal resources for the right length of time. Through a holistic governance approach, you can reduce security risks, meet compliance goals, and ensure that you know who has access.
@@ -38,7 +38,7 @@ This document set is designed to enable you to move from ad hoc or loosely gover
See the following articles on securing external access to resources. We recommend you take the actions in the listed order.
-1. [Determine your desired security posture for external access](1-secure-access-posture.md)
+1. [Determine your security posture for external access](1-secure-access-posture.md)
2. [Discover your current state](2-secure-access-current-state.md)
active-directory https://docs.microsoft.com/en-us/azure/active-directory/governance/entitlement-management-organization https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/governance/entitlement-management-organization.md
@@ -135,8 +135,6 @@ If you no longer have a relationship with an external Azure AD directory or doma
1. In the connected organization's overview pane, select **Delete** to delete it.
- Currently, you can delete a connected organization only if there are no connected users.
-
![The connected organization Delete button](./media/entitlement-management-organization/organization-delete.png)

## Managing a connected organization programmatically
active-directory https://docs.microsoft.com/en-us/azure/active-directory/saas-apps/adpfederatedsso-tutorial https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/saas-apps/adpfederatedsso-tutorial.md
@@ -9,7 +9,7 @@ ms.service: active-directory
ms.subservice: saas-app-tutorial
ms.workload: identity
ms.topic: tutorial
-ms.date: 08/26/2019
+ms.date: 12/24/2020
ms.author: jeedes
---
@@ -21,7 +21,6 @@ In this tutorial, you'll learn how to integrate ADP with Azure Active Directory
* Enable your users to be automatically signed-in to ADP with their Azure AD accounts.
* Manage your accounts in one central location - the Azure portal.
-To learn more about SaaS app integration with Azure AD, see [What is application access and single sign-on with Azure Active Directory](../manage-apps/what-is-single-sign-on.md).
## Prerequisites
@@ -43,18 +42,18 @@ In this tutorial, you configure and test Azure AD SSO in a test environment.
To configure the integration of ADP into Azure AD, you need to add ADP from the gallery to your list of managed SaaS apps.
-1. Sign in to the [Azure portal](https://portal.azure.com) using either a work or school account, or a personal Microsoft account.
+1. Sign in to the Azure portal using either a work or school account, or a personal Microsoft account.
1. On the left navigation pane, select the **Azure Active Directory** service.
1. Navigate to **Enterprise Applications** and then select **All Applications**.
1. To add new application, select **New application**.
1. In the **Add from the gallery** section, type **ADP** in the search box.
1. Select **ADP** from results panel and then add the app. Wait a few seconds while the app is added to your tenant.
-## Configure and test Azure AD single sign-on for ADP
+## Configure and test Azure AD SSO for ADP
Configure and test Azure AD SSO with ADP using a test user called **B.Simon**. For SSO to work, you need to establish a link relationship between an Azure AD user and the related user in ADP.
-To configure and test Azure AD SSO with ADP, complete the following building blocks:
+To configure and test Azure AD SSO with ADP, perform the following steps:
1. **[Configure Azure AD SSO](#configure-azure-ad-sso)** - to enable your users to use this feature.
1. **[Create an Azure AD test user](#create-an-azure-ad-test-user)** - to test Azure AD single sign-on with B.Simon.
@@ -79,9 +78,9 @@ Follow these steps to enable Azure AD SSO in the Azure portal.
d. Set the **Visible to users** field value to **No**.
-1. In the [Azure portal](https://portal.azure.com/), on the **ADP** application integration page, find the **Manage** section and select **Single sign-on**.
+1. In the Azure portal, on the **ADP** application integration page, find the **Manage** section and select **Single sign-on**.
1. On the **Select a Single sign-on method** page, select **SAML**.
-1. On the **Set up Single Sign-On with SAML** page, click the edit/pen icon for **Basic SAML Configuration** to edit the settings.
+1. On the **Set up Single Sign-On with SAML** page, click the pencil icon for **Basic SAML Configuration** to edit the settings.
![Edit Basic SAML Configuration](common/edit-urls.png)
@@ -117,15 +116,9 @@ In this section, you'll enable B.Simon to use Azure single sign-on by granting a
1. In the Azure portal, select **Enterprise Applications**, and then select **All applications**.
1. In the applications list, select **ADP**.
1. In the app's overview page, find the **Manage** section and select **Users and groups**.
-
-   ![The "Users and groups" link](common/users-groups-blade.png)
-
1. Select **Add user**, then select **Users and groups** in the **Add Assignment** dialog.
-
-   ![The Add User link](common/add-assign-user.png)
-
1. In the **Users and groups** dialog, select **B.Simon** from the Users list, then click the **Select** button at the bottom of the screen.
-1. If you're expecting any role value in the SAML assertion, in the **Select Role** dialog, select the appropriate role for the user from the list and then click the **Select** button at the bottom of the screen.
+1. If you are expecting a role to be assigned to the users, you can select it from the **Select a role** dropdown. If no role has been set up for this app, you see "Default Access" role selected.
1. In the **Add Assignment** dialog, click the **Assign** button.

## Configure ADP SSO
@@ -141,7 +134,7 @@ To configure single sign-on on **ADP** side, you need to upload the downloaded *
> Your employees who require federated access to your ADP services must be assigned to the ADP service app and subsequently, users must be reassigned to the specific ADP service. Upon receipt of confirmation from your ADP representative, configure your ADP service(s) and assign/manage users to control user access to the specific ADP service.
-1. Sign in to the [Azure portal](https://portal.azure.com) using either a work or school account, or a personal Microsoft account.
+1. Sign in to the Azure portal using either a work or school account, or a personal Microsoft account.
1. On the left navigation pane, select the **Azure Active Directory** service.
1. Navigate to **Enterprise Applications** and then select **All Applications**.
1. To add new application, select **New application**.
@@ -157,7 +150,7 @@ Upon receipt of confirmation from your ADP representative, configure your ADP se
1. Set the **Visible to users** field value to **Yes**.
-1. In the [Azure portal](https://portal.azure.com/), on the **ADP** application integration page, find the **Manage** section and select **Single sign-on**.
+1. In the Azure portal, on the **ADP** application integration page, find the **Manage** section and select **Single sign-on**.
1. On the **Select a Single sign-on method** dialog, select **Mode** as **Linked** to link your application to **ADP**.
@@ -207,14 +200,13 @@ The objective of this section is to create a user called B.Simon in ADP. Work wi
## Test SSO
-In this section, you test your Azure AD single sign-on configuration using the Access Panel.
+In this section, you test your Azure AD single sign-on configuration with the following options.
-When you click the ADP tile in the Access Panel, you should be automatically signed in to the ADP for which you set up SSO. For more information about the Access Panel, see [Introduction to the Access Panel](../user-help/my-apps-portal-end-user-access.md).
+* Click on **Test this application** in Azure portal and you should be automatically signed in to the ADP for which you set up the SSO.
-## Additional resources
+* You can use Microsoft My Apps. When you click the ADP tile in My Apps, you should be automatically signed in to the ADP for which you set up the SSO. For more information about My Apps, see [Introduction to My Apps](https://docs.microsoft.com/azure/active-directory/active-directory-saas-access-panel-introduction).
-- [List of Tutorials on How to Integrate SaaS Apps with Azure Active Directory](./tutorial-list.md)
-- [What is application access and single sign-on with Azure Active Directory?](../manage-apps/what-is-single-sign-on.md)
+## Next steps
-- [What is conditional access in Azure Active Directory?](../conditional-access/overview.md)
\ No newline at end of file
+Once you configure ADP you can enforce session control, which protects exfiltration and infiltration of your organization's sensitive data in real time. Session control extends from Conditional Access. [Learn how to enforce session control with Microsoft Cloud App Security](https://docs.microsoft.com/cloud-app-security/proxy-deployment-any-app).
\ No newline at end of file
active-directory https://docs.microsoft.com/en-us/azure/active-directory/saas-apps/box-tutorial https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/saas-apps/box-tutorial.md
@@ -9,7 +9,7 @@ ms.service: active-directory
ms.subservice: saas-app-tutorial
ms.workload: identity
ms.topic: tutorial
-ms.date: 03/24/2020
+ms.date: 01/05/2021
ms.author: jeedes
---
@@ -21,8 +21,6 @@ In this tutorial, you'll learn how to integrate Box with Azure Active Directory
* Enable your users to be automatically signed-in to Box with their Azure AD accounts.
* Manage your accounts in one central location - the Azure portal.
-To learn more about SaaS app integration with Azure AD, see [What is application access and single sign-on with Azure Active Directory](../manage-apps/what-is-single-sign-on.md).
-
## Prerequisites

To get started, you need the following items:
@@ -40,24 +38,26 @@ In this tutorial, you configure and test Azure AD SSO in a test environment.
* Box supports **SP** initiated SSO * Box supports [**Automated** user provisioning and deprovisioning](./box-userprovisioning-tutorial.md) (recommended) * Box supports **Just In Time** user provisioning
-* Once you configure Box you can enforce Session Control, which protect exfiltration and infiltration of your organization's sensitive data in real-time. Session Control extend from Conditional Access. [Learn how to enforce session control with Microsoft Cloud App Security](/cloud-app-security/proxy-deployment-aad)
+
+> [!NOTE]
+> The Identifier of this application is a fixed string value, so only one instance can be configured in one tenant.
## Adding Box from the gallery

To configure the integration of Box into Azure AD, you need to add Box from the gallery to your list of managed SaaS apps.
-1. Sign in to the [Azure portal](https://portal.azure.com) using either a work or school account, or a personal Microsoft account.
+1. Sign in to the Azure portal using either a work or school account, or a personal Microsoft account.
1. On the left navigation pane, select the **Azure Active Directory** service.
1. Navigate to **Enterprise Applications** and then select **All Applications**.
1. To add new application, select **New application**.
1. In the **Add from the gallery** section, type **Box** in the search box.
1. Select **Box** from results panel and then add the app. Wait a few seconds while the app is added to your tenant.
-## Configure and test Azure AD single sign-on for Box
+## Configure and test Azure AD SSO for Box
Configure and test Azure AD SSO with Box using a test user called **B.Simon**. For SSO to work, you need to establish a link relationship between an Azure AD user and the related user in Box.
-To configure and test Azure AD SSO with Box, complete the following building blocks:
+To configure and test Azure AD SSO with Box, perform the following steps:
1. **[Configure Azure AD SSO](#configure-azure-ad-sso)** - to enable your users to use this feature.
1. **[Create an Azure AD test user](#create-an-azure-ad-test-user)** - to test Azure AD single sign-on with B.Simon.
@@ -70,7 +70,7 @@ To configure and test Azure AD SSO with Box, complete the following building blo
Follow these steps to enable Azure AD SSO in the Azure portal.
-1. In the [Azure portal](https://portal.azure.com/), on the **Box** application integration page, find the **Manage** section and select **single sign-on**.
+1. In the Azure portal, on the **Box** application integration page, find the **Manage** section and select **single sign-on**.
1. On the **Select a single sign-on method** page, select **SAML**.
1. On the **Set up single sign-on with SAML** page, click the edit/pen icon for **Basic SAML Configuration** to edit the settings.
@@ -84,7 +84,7 @@ Follow these steps to enable Azure AD SSO in the Azure portal.
b. In the **Identifier (Entity ID)** text box, type a URL: `box.net`
- c. In the **Reply URL** text box, type a URL:
+ c. In the **Reply URL** text box, type the URL:
`https://sso.services.box.net/sp/ACS.saml2`

> [!NOTE]
@@ -114,19 +114,23 @@ In this section, you'll enable B.Simon to use Azure single sign-on by granting a
1. In the applications list, select **Box**.
1. In the app's overview page, find the **Manage** section and select **Users and groups**.
-
-   ![The "Users and groups" link](common/users-groups-blade.png)
-
1. Select **Add user**, then select **Users and groups** in the **Add Assignment** dialog.
-
-   ![The Add User link](common/add-assign-user.png)
-
1. In the **Users and groups** dialog, select **B.Simon** from the Users list, then click the **Select** button at the bottom of the screen.
-1. If you're expecting any role value in the SAML assertion, in the **Select Role** dialog, select the appropriate role for the user from the list and then click the **Select** button at the bottom of the screen.
+1. If you are expecting a role to be assigned to the users, you can select it from the **Select a role** dropdown. If no role has been set up for this app, you see "Default Access" role selected.
1. In the **Add Assignment** dialog, click the **Assign** button.

## Configure Box SSO
-To configure SSO for your application, follow the procedure in [Set up SSO on your own](https://community.box.com/t5/How-to-Guides-for-Admins/Setting-Up-Single-Sign-On-SSO-for-your-Enterprise/ta-p/1263#ssoonyourown).
+1. To automate the configuration within Box, you need to install the **My Apps Secure Sign-in browser extension** by clicking **Install the extension**.
+
+ ![My apps extension](common/install-myappssecure-extension.png)
+
+2. After adding the extension to the browser, clicking **Set up Box** will direct you to the Box application. From there, provide the admin credentials to sign into Box. The browser extension will automatically configure the application for you and automate step 3.
+
+ ![Setup configuration](common/setup-sso.png)
+
+3. If you want to set up Box manually, in a different web browser window, sign in to your Box company site as an administrator and follow the procedure in [Set up SSO on your own](https://community.box.com/t5/How-to-Guides-for-Admins/Setting-Up-Single-Sign-On-SSO-for-your-Enterprise/ta-p/1263#ssoonyourown).
> [!NOTE]
> If you are unable to configure the SSO settings for your Box account, you need to send the downloaded **Federation Metadata XML** to [Box support team](https://community.box.com/t5/custom/page/page-id/submit_sso_questionaire). They set this setting to have the SAML SSO connection set properly on both sides.
@@ -140,20 +144,15 @@ In this section, a user called Britta Simon is created in Box. Box supports just
## Test SSO
-In this section, you test your Azure AD single sign-on configuration using the Access Panel.
-
-When you click the Box tile in the Access Panel, you should be automatically signed in to the Box for which you set up SSO. For more information about the Access Panel, see [Introduction to the Access Panel](../user-help/my-apps-portal-end-user-access.md).
-
-## Additional resources
+In this section, you test your Azure AD single sign-on configuration with the following options.
-- [ List of Tutorials on How to Integrate SaaS Apps with Azure Active Directory ](./tutorial-list.md)
+* Click on **Test this application** in Azure portal. This will redirect to the Box Sign-on URL where you can initiate the login flow.
-- [What is application access and single sign-on with Azure Active Directory? ](../manage-apps/what-is-single-sign-on.md)
+* Go to the Box Sign-on URL directly and initiate the login flow from there.
-- [What is conditional access in Azure Active Directory?](../conditional-access/overview.md)
+* You can use Microsoft My Apps. When you click the Box tile in My Apps, this will redirect to the Box Sign-on URL. For more information about My Apps, see [Introduction to My Apps](https://docs.microsoft.com/azure/active-directory/active-directory-saas-access-panel-introduction).
-- [Try Box with Azure AD](https://aad.portal.azure.com/)
-- [What is session control in Microsoft Cloud App Security?](/cloud-app-security/proxy-intro-aad)
+## Next steps
-- [How to protect Box with advanced visibility and controls](/cloud-app-security/protect-box)
\ No newline at end of file
+Once you configure Box you can enforce Session Control, which protects exfiltration and infiltration of your organization's sensitive data in real time. Session Control extends from Conditional Access. [Learn how to enforce session control with Microsoft Cloud App Security](/cloud-app-security/proxy-deployment-aad)
active-directory https://docs.microsoft.com/en-us/azure/active-directory/saas-apps/linkedinlearning-tutorial https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/saas-apps/linkedinlearning-tutorial.md
@@ -9,7 +9,7 @@ ms.service: active-directory
ms.subservice: saas-app-tutorial
ms.workload: identity
ms.topic: tutorial
-ms.date: 01/31/2020
+ms.date: 12/28/2020
ms.author: jeedes
---
@@ -21,8 +21,6 @@ In this tutorial, you'll learn how to integrate LinkedIn Learning with Azure Act
* Enable your users to be automatically signed-in to LinkedIn Learning with their Azure AD accounts.
* Manage your accounts in one central location - the Azure portal.
-To learn more about SaaS app integration with Azure AD, see [What is application access and single sign-on with Azure Active Directory](../manage-apps/what-is-single-sign-on.md).
-
## Prerequisites

To get started, you need the following items:
@@ -36,13 +34,13 @@ In this tutorial, you configure and test Azure AD SSO in a test environment.
* LinkedIn Learning supports **SP and IDP** initiated SSO * LinkedIn Learning supports **Just In Time** user provisioning
-* Once you configure LinkedIn Learning you can enforce Session control, which protect exfiltration and infiltration of your organization's sensitive data in real-time. Session control extend from Conditional Access. [Learn how to enforce session control with Microsoft Cloud App Security](/cloud-app-security/proxy-deployment-aad)
+
## Adding LinkedIn Learning from the gallery

To configure the integration of LinkedIn Learning into Azure AD, you need to add LinkedIn Learning from the gallery to your list of managed SaaS apps.
-1. Sign in to the [Azure portal](https://portal.azure.com) using either a work or school account, or a personal Microsoft account.
+1. Sign in to the Azure portal using either a work or school account, or a personal Microsoft account.
1. On the left navigation pane, select the **Azure Active Directory** service.
1. Navigate to **Enterprise Applications** and then select **All Applications**.
1. To add new application, select **New application**.
@@ -50,11 +48,11 @@ To configure the integration of LinkedIn Learning into Azure AD, you need to add
1. Select **LinkedIn Learning** from results panel and then add the app. Wait a few seconds while the app is added to your tenant.
-## Configure and test Azure AD single sign-on for LinkedIn Learning
+## Configure and test Azure AD SSO for LinkedIn Learning
Configure and test Azure AD SSO with LinkedIn Learning using a test user called **B.Simon**. For SSO to work, you need to establish a link relationship between an Azure AD user and the related user in LinkedIn Learning.
-To configure and test Azure AD SSO with LinkedIn Learning, complete the following building blocks:
+To configure and test Azure AD SSO with LinkedIn Learning, perform the following steps:
1. **[Configure Azure AD SSO](#configure-azure-ad-sso)** - to enable your users to use this feature.
1. **[Create an Azure AD test user](#create-an-azure-ad-test-user)** - to test Azure AD single sign-on with B.Simon.
@@ -67,9 +65,9 @@ To configure and test Azure AD SSO with LinkedIn Learning, complete the followin
Follow these steps to enable Azure AD SSO in the Azure portal.
-1. In the [Azure portal](https://portal.azure.com/), on the **LinkedIn Learning** application integration page, find the **Manage** section and select **single sign-on**.
+1. In the Azure portal, on the **LinkedIn Learning** application integration page, find the **Manage** section and select **single sign-on**.
1. On the **Select a single sign-on method** page, select **SAML**.
-1. On the **Set up single sign-on with SAML** page, click the edit/pen icon for **Basic SAML Configuration** to edit the settings.
+1. On the **Set up single sign-on with SAML** page, click the pencil icon for **Basic SAML Configuration** to edit the settings.
![Edit Basic SAML Configuration](common/edit-urls.png)
@@ -119,15 +117,9 @@ In this section, you'll enable B.Simon to use Azure single sign-on by granting a
1. In the Azure portal, select **Enterprise Applications**, and then select **All applications**.
1. In the applications list, select **LinkedIn Learning**.
1. In the app's overview page, find the **Manage** section and select **Users and groups**.
-
-   ![The "Users and groups" link](common/users-groups-blade.png)
-
1. Select **Add user**, then select **Users and groups** in the **Add Assignment** dialog.
-
-   ![The Add User link](common/add-assign-user.png)
-
1. In the **Users and groups** dialog, select **B.Simon** from the Users list, then click the **Select** button at the bottom of the screen.
-1. If you're expecting any role value in the SAML assertion, in the **Select Role** dialog, select the appropriate role for the user from the list and then click the **Select** button at the bottom of the screen.
+1. If you are expecting a role to be assigned to the users, you can select it from the **Select a role** dropdown. If no role has been set up for this app, you see "Default Access" role selected.
1. In the **Add Assignment** dialog, click the **Assign** button.

## Configure LinkedIn Learning SSO
@@ -158,18 +150,21 @@ LinkedIn Learning Application supports Just in time user provisioning and after
## Test SSO
-In this section, you test your Azure AD single sign-on configuration using the Access Panel.
+In this section, you test your Azure AD single sign-on configuration with the following options.
+
+#### SP initiated:
+
+* Click on **Test this application** in Azure portal. This will redirect to the LinkedIn Learning Sign-on URL where you can initiate the login flow.
-When you click the LinkedIn Learning tile in the Access Panel, you should be automatically signed in to the LinkedIn Learning for which you set up SSO. For more information about the Access Panel, see [Introduction to the Access Panel](../user-help/my-apps-portal-end-user-access.md).
+* Go to the LinkedIn Learning Sign-on URL directly and initiate the login flow from there.
-## Additional resources
+#### IDP initiated:
-- [ List of Tutorials on How to Integrate SaaS Apps with Azure Active Directory ](./tutorial-list.md)
+* Click on **Test this application** in Azure portal and you should be automatically signed in to the LinkedIn Learning for which you set up the SSO.
-- [What is application access and single sign-on with Azure Active Directory? ](../manage-apps/what-is-single-sign-on.md)
+You can also use Microsoft My Apps to test the application in any mode. When you click the LinkedIn Learning tile in My Apps, if configured in SP mode you would be redirected to the application sign-on page for initiating the login flow, and if configured in IDP mode, you should be automatically signed in to the LinkedIn Learning for which you set up the SSO. For more information about My Apps, see [Introduction to My Apps](https://docs.microsoft.com/azure/active-directory/active-directory-saas-access-panel-introduction).
-- [What is conditional access in Azure Active Directory?](../conditional-access/overview.md)
-- [Try LinkedIn Learning with Azure AD](https://aad.portal.azure.com/)
+## Next steps
-- [What is session control in Microsoft Cloud App Security?](/cloud-app-security/proxy-intro-aad)
\ No newline at end of file
+Once you configure LinkedIn Learning you can enforce Session control, which protects exfiltration and infiltration of your organization's sensitive data in real time. Session control extends from Conditional Access. [Learn how to enforce session control with Microsoft Cloud App Security](/cloud-app-security/proxy-deployment-aad).
active-directory https://docs.microsoft.com/en-us/azure/active-directory/saas-apps/tableauserver-tutorial https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/saas-apps/tableauserver-tutorial.md
@@ -9,7 +9,7 @@ ms.service: active-directory
ms.subservice: saas-app-tutorial
ms.workload: identity
ms.topic: tutorial
-ms.date: 05/07/2020
+ms.date: 12/27/2020
ms.author: jeedes
---
@@ -21,7 +21,6 @@ In this tutorial, you'll learn how to integrate Tableau Server with Azure Active
* Enable your users to be automatically signed-in to Tableau Server with their Azure AD accounts.
* Manage your accounts in one central location - the Azure portal.
-To learn more about SaaS app integration with Azure AD, see [What is application access and single sign-on with Azure Active Directory](../manage-apps/what-is-single-sign-on.md).
## Prerequisites
@@ -35,24 +34,23 @@ To get started, you need the following items:
In this tutorial, you configure and test Azure AD SSO in a test environment.

* Tableau Server supports **SP** initiated SSO
-* Once you configure Tableau Server you can enforce Session control, which protect exfiltration and infiltration of your organization's sensitive data in real-time. Session control extend from Conditional Access. [Learn how to enforce session control with Microsoft Cloud App Security](/cloud-app-security/proxy-deployment-aad)
## Adding Tableau Server from the gallery

To configure the integration of Tableau Server into Azure AD, you need to add Tableau Server from the gallery to your list of managed SaaS apps.
-1. Sign in to the [Azure portal](https://portal.azure.com) using either a work or school account, or a personal Microsoft account.
+1. Sign in to the Azure portal using either a work or school account, or a personal Microsoft account.
1. On the left navigation pane, select the **Azure Active Directory** service.
1. Navigate to **Enterprise Applications** and then select **All Applications**.
1. To add new application, select **New application**.
1. In the **Add from the gallery** section, type **Tableau Server** in the search box.
1. Select **Tableau Server** from results panel and then add the app. Wait a few seconds while the app is added to your tenant.
-## Configure and test Azure AD single sign-on for Tableau Server
+## Configure and test Azure AD SSO for Tableau Server
Configure and test Azure AD SSO with Tableau Server using a test user called **B.Simon**. For SSO to work, you need to establish a link relationship between an Azure AD user and the related user in Tableau Server.
-To configure and test Azure AD SSO with Tableau Server, complete the following building blocks:
+To configure and test Azure AD SSO with Tableau Server, perform the following steps:
1. **[Configure Azure AD SSO](#configure-azure-ad-sso)** - to enable your users to use this feature.
1. **[Create an Azure AD test user](#create-an-azure-ad-test-user)** - to test Azure AD single sign-on with B.Simon.
@@ -65,9 +63,9 @@ To configure and test Azure AD SSO with Tableau Server, complete the following b
Follow these steps to enable Azure AD SSO in the Azure portal.
-1. In the [Azure portal](https://portal.azure.com/), on the **Tableau Server** application integration page, find the **Manage** section and select **single sign-on**.
+1. In the Azure portal, on the **Tableau Server** application integration page, find the **Manage** section and select **single sign-on**.
1. On the **Select a single sign-on method** page, select **SAML**.
-1. On the **Set up single sign-on with SAML** page, click the edit/pen icon for **Basic SAML Configuration** to edit the settings.
+1. On the **Set up single sign-on with SAML** page, click the pencil icon for **Basic SAML Configuration** to edit the settings.
![Edit Basic SAML Configuration](common/edit-urls.png)
@@ -112,15 +110,9 @@ In this section, you'll enable B.Simon to use Azure single sign-on by granting a
1. In the Azure portal, select **Enterprise Applications**, and then select **All applications**.
1. In the applications list, select **Tableau Server**.
1. In the app's overview page, find the **Manage** section and select **Users and groups**.
-
-   ![The "Users and groups" link](common/users-groups-blade.png)
-
1. Select **Add user**, then select **Users and groups** in the **Add Assignment** dialog.
-
-   ![The Add User link](common/add-assign-user.png)
-
1. In the **Users and groups** dialog, select **B.Simon** from the Users list, then click the **Select** button at the bottom of the screen.
-1. If you're expecting any role value in the SAML assertion, in the **Select Role** dialog, select the appropriate role for the user from the list and then click the **Select** button at the bottom of the screen.
+1. If you are expecting a role to be assigned to the users, you can select it from the **Select a role** dropdown. If no role has been set up for this app, you see "Default Access" role selected.
1. In the **Add Assignment** dialog, click the **Assign** button.

## Configure Tableau Server SSO
@@ -165,18 +157,15 @@ That username of the user should match the value which you have configured in th
## Test SSO
-In this section, you test your Azure AD single sign-on configuration using the Access Panel.
-
-When you click the Tableau Server tile in the Access Panel, you should be automatically signed in to the Tableau Server for which you set up SSO. For more information about the Access Panel, see [Introduction to the Access Panel](../user-help/my-apps-portal-end-user-access.md).
+In this section, you test your Azure AD single sign-on configuration with the following options.
-## Additional resources
+* Click **Test this application** in the Azure portal. This redirects you to the Tableau Server Sign-on URL, where you can initiate the login flow.
-- [ List of Tutorials on How to Integrate SaaS Apps with Azure Active Directory ](./tutorial-list.md)
+* Go to the Tableau Server Sign-on URL directly and initiate the login flow from there.
-- [What is application access and single sign-on with Azure Active Directory? ](../manage-apps/what-is-single-sign-on.md)
+* You can use Microsoft My Apps. When you click the Tableau Server tile in My Apps, you're redirected to the Tableau Server Sign-on URL. For more information about My Apps, see [Introduction to My Apps](https://docs.microsoft.com/azure/active-directory/active-directory-saas-access-panel-introduction).
-- [What is conditional access in Azure Active Directory?](../conditional-access/overview.md) -- [Try Tableau Server with Azure AD](https://aad.portal.azure.com/)
+## Next steps
-- [What is session control in Microsoft Cloud App Security?](/cloud-app-security/proxy-intro-aad)\ No newline at end of file
+Once you configure Tableau Server, you can enforce session control, which protects against exfiltration and infiltration of your organization's sensitive data in real time. Session control extends from Conditional Access. [Learn how to enforce session control with Microsoft Cloud App Security](/cloud-app-security/proxy-deployment-aad).
\ No newline at end of file
active-directory https://docs.microsoft.com/en-us/azure/active-directory/saas-apps/upshotly-tutorial https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/saas-apps/upshotly-tutorial.md
@@ -9,7 +9,7 @@ ms.service: active-directory
ms.subservice: saas-app-tutorial ms.workload: identity ms.topic: tutorial
-ms.date: 1/7/2020
+ms.date: 1/5/2021
ms.author: jeedes ---
@@ -21,8 +21,6 @@ In this tutorial, you'll learn how to integrate Upshotly with Azure Active Direc
* Enable your users to be automatically signed-in to Upshotly with their Azure AD accounts. * Manage your accounts in one central location - the Azure portal.
-To learn more about SaaS app integration with Azure AD, see [What is application access and single sign-on with Azure Active Directory](../manage-apps/what-is-single-sign-on.md).
- ## Prerequisites To get started, you need the following items:
@@ -40,33 +38,33 @@ In this tutorial, you configure and test Azure AD SSO in a test environment.
To configure the integration of Upshotly into Azure AD, you need to add Upshotly from the gallery to your list of managed SaaS apps.
-1. Sign in to the [Azure portal](https://portal.azure.com) using either a work or school account, or a personal Microsoft account.
+1. Sign in to the Azure portal using either a work or school account, or a personal Microsoft account.
1. On the left navigation pane, select the **Azure Active Directory** service. 1. Navigate to **Enterprise Applications** and then select **All Applications**. 1. To add new application, select **New application**. 1. In the **Add from the gallery** section, type **Upshotly** in the search box. 1. Select **Upshotly** from results panel and then add the app. Wait a few seconds while the app is added to your tenant.
-## Configure and test Azure AD single sign-on for Upshotly
+## Configure and test Azure AD SSO for Upshotly
Configure and test Azure AD SSO with Upshotly using a test user called **B.Simon**. For SSO to work, you need to establish a link relationship between an Azure AD user and the related user in Upshotly.
-To configure and test Azure AD SSO with Upshotly, complete the following building blocks:
+To configure and test Azure AD SSO with Upshotly, perform the following steps:
1. **[Configure Azure AD SSO](#configure-azure-ad-sso)** - to enable your users to use this feature.
- * **[Create an Azure AD test user](#create-an-azure-ad-test-user)** - to test Azure AD single sign-on with B.Simon.
- * **[Assign the Azure AD test user](#assign-the-azure-ad-test-user)** - to enable B.Simon to use Azure AD single sign-on.
+ 1. **[Create an Azure AD test user](#create-an-azure-ad-test-user)** - to test Azure AD single sign-on with B.Simon.
+ 1. **[Assign the Azure AD test user](#assign-the-azure-ad-test-user)** - to enable B.Simon to use Azure AD single sign-on.
1. **[Configure Upshotly SSO](#configure-upshotly-sso)** - to configure the single sign-on settings on application side.
- * **[Create Upshotly test user](#create-upshotly-test-user)** - to have a counterpart of B.Simon in Upshotly that is linked to the Azure AD representation of user.
+ 1. **[Create Upshotly test user](#create-upshotly-test-user)** - to have a counterpart of B.Simon in Upshotly that is linked to the Azure AD representation of user.
1. **[Test SSO](#test-sso)** - to verify whether the configuration works. ## Configure Azure AD SSO Follow these steps to enable Azure AD SSO in the Azure portal.
-1. In the [Azure portal](https://portal.azure.com/), on the **Upshotly** application integration page, find the **Manage** section and select **single sign-on**.
+1. In the Azure portal, on the **Upshotly** application integration page, find the **Manage** section and select **single sign-on**.
1. On the **Select a single sign-on method** page, select **SAML**.
-1. On the **Set up single sign-on with SAML** page, click the edit/pen icon for **Basic SAML Configuration** to edit the settings.
+1. On the **Set up single sign-on with SAML** page, click the edit/pencil icon for **Basic SAML Configuration** to edit the settings.
![Edit Basic SAML Configuration](common/edit-urls.png)
@@ -108,19 +106,23 @@ In this section, you'll enable B.Simon to use Azure single sign-on by granting a
1. In the applications list, select **Upshotly**. 1. In the app's overview page, find the **Manage** section and select **Users and groups**.
- ![The "Users and groups" link](common/users-groups-blade.png)
- 1. Select **Add user**, then select **Users and groups** in the **Add Assignment** dialog.
- ![The Add User link](common/add-assign-user.png)
- 1. In the **Users and groups** dialog, select **B.Simon** from the Users list, then click the **Select** button at the bottom of the screen.
-1. If you're expecting any role value in the SAML assertion, in the **Select Role** dialog, select the appropriate role for the user from the list and then click the **Select** button at the bottom of the screen.
+1. If you're expecting a role to be assigned to the users, you can select it from the **Select a role** dropdown. If no role has been set up for this app, you'll see the "Default Access" role selected.
1. In the **Add Assignment** dialog, click the **Assign** button. ## Configure Upshotly SSO
-1. In a different web browser window, sign in to your Upshotly company site as an administrator.
+1. To automate the configuration within Upshotly, you need to install **My Apps Secure Sign-in browser extension** by clicking **Install the extension**.
+
+ ![My apps extension](common/install-myappssecure-extension.png)
+
+2. After adding the extension to the browser, clicking **Set up Upshotly** directs you to the Upshotly application. From there, provide the admin credentials to sign in to Upshotly. The browser extension automatically configures the application for you and automates steps 3-4.
+
+ ![Setup configuration](common/setup-sso.png)
+
+3. If you want to set up Upshotly manually, in a different web browser window, sign in to your Upshotly company site as an administrator.
1. Click on the **User Profile** and navigate to **Admin > SSO** and perform the following steps:
@@ -136,16 +138,20 @@ In this section, you create a user called B.Simon in Upshotly Edge Cloud. Work w
## Test SSO
-In this section, you test your Azure AD single sign-on configuration using the Access Panel.
+In this section, you test your Azure AD single sign-on configuration with the following options.
+
+#### SP initiated:
+
+* Click **Test this application** in the Azure portal. This redirects you to the Upshotly Sign on URL, where you can initiate the login flow.
-When you click the Upshotly tile in the Access Panel, you should be automatically signed in to the Upshotly for which you set up SSO. For more information about the Access Panel, see [Introduction to the Access Panel](../user-help/my-apps-portal-end-user-access.md).
+* Go to the Upshotly Sign-on URL directly and initiate the login flow from there.
-## Additional resources
+#### IDP initiated:
-- [ List of Tutorials on How to Integrate SaaS Apps with Azure Active Directory ](./tutorial-list.md)
+* Click **Test this application** in the Azure portal and you should be automatically signed in to the Upshotly for which you set up the SSO.
-- [What is application access and single sign-on with Azure Active Directory? ](../manage-apps/what-is-single-sign-on.md)
+You can also use Microsoft My Apps to test the application in any mode. When you click the Upshotly tile in My Apps, if configured in SP mode you're redirected to the application sign-on page to initiate the login flow; if configured in IDP mode, you're automatically signed in to the Upshotly for which you set up the SSO. For more information about My Apps, see [Introduction to My Apps](https://docs.microsoft.com/azure/active-directory/active-directory-saas-access-panel-introduction).
-- [What is conditional access in Azure Active Directory?](../conditional-access/overview.md)
+## Next steps
-- [Try Upshotly with Azure AD](https://aad.portal.azure.com/)\ No newline at end of file
+Once you configure Upshotly, you can enforce session control, which protects against exfiltration and infiltration of your organization's sensitive data in real time. Session control extends from Conditional Access. [Learn how to enforce session control with Microsoft Cloud App Security](/cloud-app-security/proxy-deployment-any-app).
\ No newline at end of file
active-directory https://docs.microsoft.com/en-us/azure/active-directory/saas-apps/whimsical-tutorial https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/saas-apps/whimsical-tutorial.md
@@ -9,7 +9,7 @@ ms.service: active-directory
ms.subservice: saas-app-tutorial ms.workload: identity ms.topic: tutorial
-ms.date: 08/20/2020
+ms.date: 01/05/2021
ms.author: jeedes ---
@@ -21,8 +21,6 @@ In this tutorial, you'll learn how to integrate Whimsical with Azure Active Dire
* Enable your users to be automatically signed-in to Whimsical with their Azure AD accounts. * Manage your accounts in one central location - the Azure portal.
-To learn more about SaaS app integration with Azure AD, see [What is application access and single sign-on with Azure Active Directory](../manage-apps/what-is-single-sign-on.md).
- ## Prerequisites To get started, you need the following items:
@@ -36,7 +34,6 @@ In this tutorial, you configure and test Azure AD SSO in a test environment.
* Whimsical supports **SP and IDP** initiated SSO * Whimsical supports **Just In Time** user provisioning
-* Once you configure Whimsical you can enforce session control, which protects exfiltration and infiltration of your organization's sensitive data in real time. Session control extends from Conditional Access. [Learn how to enforce session control with Microsoft Cloud App Security](/cloud-app-security/proxy-deployment-any-app).
> [!NOTE] > Identifier of this application is a fixed string value so only one instance can be configured in one tenant.
@@ -45,7 +42,7 @@ In this tutorial, you configure and test Azure AD SSO in a test environment.
To configure the integration of Whimsical into Azure AD, you need to add Whimsical from the gallery to your list of managed SaaS apps.
-1. Sign in to the [Azure portal](https://portal.azure.com) using either a work or school account, or a personal Microsoft account.
+1. Sign in to the Azure portal using either a work or school account, or a personal Microsoft account.
1. On the left navigation pane, select the **Azure Active Directory** service. 1. Navigate to **Enterprise Applications** and then select **All Applications**. 1. To add new application, select **New application**.
@@ -56,7 +53,7 @@ To configure the integration of Whimsical into Azure AD, you need to add Whimsic
Configure and test Azure AD SSO with Whimsical using a test user called **B.Simon**. For SSO to work, you need to establish a link relationship between an Azure AD user and the related user in Whimsical.
-To configure and test Azure AD SSO with Whimsical, complete the following building blocks:
+To configure and test Azure AD SSO with Whimsical, perform the following steps:
1. **[Configure Azure AD SSO](#configure-azure-ad-sso)** - to enable your users to use this feature. 1. **[Create an Azure AD test user](#create-an-azure-ad-test-user)** - to test Azure AD single sign-on with B.Simon.
@@ -69,9 +66,9 @@ To configure and test Azure AD SSO with Whimsical, complete the following buildi
Follow these steps to enable Azure AD SSO in the Azure portal.
-1. In the [Azure portal](https://portal.azure.com/), on the **Whimsical** application integration page, find the **Manage** section and select **single sign-on**.
+1. In the Azure portal, on the **Whimsical** application integration page, find the **Manage** section and select **single sign-on**.
1. On the **Select a single sign-on method** page, select **SAML**.
-1. On the **Set up single sign-on with SAML** page, click the edit/pen icon for **Basic SAML Configuration** to edit the settings.
+1. On the **Set up single sign-on with SAML** page, click the edit/pencil icon for **Basic SAML Configuration** to edit the settings.
![Edit Basic SAML Configuration](common/edit-urls.png)
@@ -127,21 +124,27 @@ In this section, you'll enable B.Simon to use Azure single sign-on by granting a
1. In the applications list, select **Whimsical**. 1. In the app's overview page, find the **Manage** section and select **Users and groups**.
- ![The "Users and groups" link](common/users-groups-blade.png)
- 1. Select **Add user**, then select **Users and groups** in the **Add Assignment** dialog.
- ![The Add User link](common/add-assign-user.png)
- 1. In the **Users and groups** dialog, select **B.Simon** from the Users list, then click the **Select** button at the bottom of the screen.
-1. If you're expecting any role value in the SAML assertion, in the **Select Role** dialog, select the appropriate role for the user from the list and then click the **Select** button at the bottom of the screen.
+1. If you're expecting a role to be assigned to the users, you can select it from the **Select a role** dropdown. If no role has been set up for this app, you'll see the "Default Access" role selected.
1. In the **Add Assignment** dialog, click the **Assign** button. ## Configure Whimsical SSO
-To configure single sign-on on the **Whimsical** side, you need to upload the **Federation Metadata XML** you just downloaded to your [workspace settings](https://whimsical.com/workspace/settings).
+1. To automate the configuration within Whimsical, you need to install **My Apps Secure Sign-in browser extension** by clicking **Install the extension**.
+
+ ![My apps extension](common/install-myappssecure-extension.png)
+
+2. After adding the extension to the browser, clicking **Set up Whimsical** directs you to the Whimsical application. From there, provide the admin credentials to sign in to Whimsical. The browser extension automatically configures the application for you and automates steps 3-4.
-![Whimsical Workspace SAML setup](media/whimsical-tutorial/saml-setup.png)
+ ![Setup configuration](common/setup-sso.png)
+
+3. If you want to set up Whimsical manually, in a different web browser window, sign in to your Whimsical company site as an administrator.
+
+4. To configure single sign-on on the **Whimsical** side, you need to upload the **Federation Metadata XML** you just downloaded to your [workspace settings](https://whimsical.com/workspace/settings).
+
+ ![Whimsical Workspace SAML setup](media/whimsical-tutorial/saml-setup.png)
Uploading the **Federation Metadata XML** should be the only step you need to take in Whimsical to set up the SAML SSO connection.
@@ -151,18 +154,20 @@ In this section, a user called Britta Simon is created in Whimsical. Whimsical s
## Test SSO
-In this section, you test your Azure AD single sign-on configuration using the Access Panel.
+In this section, you test your Azure AD single sign-on configuration with the following options.
+
+#### SP initiated:
-When you click the Whimsical tile in the Access Panel, you should be automatically signed in to the Whimsical for which you set up SSO. For more information about the Access Panel, see [Introduction to the Access Panel](../user-help/my-apps-portal-end-user-access.md).
+* Click **Test this application** in the Azure portal. This redirects you to the Whimsical Sign on URL, where you can initiate the login flow.
-## Additional resources
+* Go to the Whimsical Sign-on URL directly and initiate the login flow from there.
-- [ List of Tutorials on How to Integrate SaaS Apps with Azure Active Directory ](./tutorial-list.md)
+#### IDP initiated:
-- [What is application access and single sign-on with Azure Active Directory? ](../manage-apps/what-is-single-sign-on.md)
+* Click **Test this application** in the Azure portal and you should be automatically signed in to the Whimsical for which you set up the SSO.
-- [What is conditional access in Azure Active Directory?](../conditional-access/overview.md)
+You can also use Microsoft My Apps to test the application in any mode. When you click the Whimsical tile in My Apps, if configured in SP mode you're redirected to the application sign-on page to initiate the login flow; if configured in IDP mode, you're automatically signed in to the Whimsical for which you set up the SSO. For more information about My Apps, see [Introduction to My Apps](https://docs.microsoft.com/azure/active-directory/active-directory-saas-access-panel-introduction).
-- [Try Whimsical with Azure AD](https://aad.portal.azure.com/)
+## Next steps
-- [What is session control in Microsoft Cloud App Security?](/cloud-app-security/proxy-intro-aad)\ No newline at end of file
+Once you configure Whimsical, you can enforce session control, which protects against exfiltration and infiltration of your organization's sensitive data in real time. Session control extends from Conditional Access. [Learn how to enforce session control with Microsoft Cloud App Security](/cloud-app-security/proxy-deployment-any-app).
active-directory https://docs.microsoft.com/en-us/azure/active-directory/saas-apps/whosoffice-tutorial https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/saas-apps/whosoffice-tutorial.md
@@ -9,7 +9,7 @@ ms.service: active-directory
ms.subservice: saas-app-tutorial ms.workload: identity ms.topic: tutorial
-ms.date: 02/21/2020
+ms.date: 01/05/2021
ms.author: jeedes ---
@@ -21,8 +21,6 @@ In this tutorial, you'll learn how to integrate WhosOffice with Azure Active Dir
* Enable your users to be automatically signed-in to WhosOffice with their Azure AD accounts. * Manage your accounts in one central location - the Azure portal.
-To learn more about SaaS app integration with Azure AD, see [What is application access and single sign-on with Azure Active Directory](../manage-apps/what-is-single-sign-on.md).
- ## Prerequisites To get started, you need the following items:
@@ -35,7 +33,6 @@ To get started, you need the following items:
In this tutorial, you configure and test Azure AD SSO in a test environment. * WhosOffice supports **SP and IDP** initiated SSO
-* Once you configure WhosOffice you can enforce session control, which protects exfiltration and infiltration of your organization's sensitive data in real time. Session control extends from Conditional Access. [Learn how to enforce session control with Microsoft Cloud App Security](/cloud-app-security/proxy-deployment-any-app).
> [!NOTE] > Identifier of this application is a fixed string value so only one instance can be configured in one tenant.
@@ -44,7 +41,7 @@ In this tutorial, you configure and test Azure AD SSO in a test environment.
To configure the integration of WhosOffice into Azure AD, you need to add WhosOffice from the gallery to your list of managed SaaS apps.
-1. Sign in to the [Azure portal](https://portal.azure.com) using either a work or school account, or a personal Microsoft account.
+1. Sign in to the Azure portal using either a work or school account, or a personal Microsoft account.
1. On the left navigation pane, select the **Azure Active Directory** service. 1. Navigate to **Enterprise Applications** and then select **All Applications**. 1. To add new application, select **New application**.
@@ -52,26 +49,26 @@ To configure the integration of WhosOffice into Azure AD, you need to add WhosOf
1. Select **WhosOffice** from results panel and then add the app. Wait a few seconds while the app is added to your tenant.
-## Configure and test Azure AD single sign-on for WhosOffice
+## Configure and test Azure AD SSO for WhosOffice
Configure and test Azure AD SSO with WhosOffice using a test user called **B.Simon**. For SSO to work, you need to establish a link relationship between an Azure AD user and the related user in WhosOffice.
-To configure and test Azure AD SSO with WhosOffice, complete the following building blocks:
+To configure and test Azure AD SSO with WhosOffice, perform the following steps:
1. **[Configure Azure AD SSO](#configure-azure-ad-sso)** - to enable your users to use this feature.
- * **[Create an Azure AD test user](#create-an-azure-ad-test-user)** - to test Azure AD single sign-on with B.Simon.
- * **[Assign the Azure AD test user](#assign-the-azure-ad-test-user)** - to enable B.Simon to use Azure AD single sign-on.
+ 1. **[Create an Azure AD test user](#create-an-azure-ad-test-user)** - to test Azure AD single sign-on with B.Simon.
+ 1. **[Assign the Azure AD test user](#assign-the-azure-ad-test-user)** - to enable B.Simon to use Azure AD single sign-on.
1. **[Configure WhosOffice SSO](#configure-whosoffice-sso)** - to configure the single sign-on settings on application side.
- * **[Create WhosOffice test user](#create-whosoffice-test-user)** - to have a counterpart of B.Simon in WhosOffice that is linked to the Azure AD representation of user.
+ 1. **[Create WhosOffice test user](#create-whosoffice-test-user)** - to have a counterpart of B.Simon in WhosOffice that is linked to the Azure AD representation of user.
1. **[Test SSO](#test-sso)** - to verify whether the configuration works. ## Configure Azure AD SSO Follow these steps to enable Azure AD SSO in the Azure portal.
-1. In the [Azure portal](https://portal.azure.com/), on the **WhosOffice** application integration page, find the **Manage** section and select **single sign-on**.
+1. In the Azure portal, on the **WhosOffice** application integration page, find the **Manage** section and select **single sign-on**.
1. On the **Select a single sign-on method** page, select **SAML**.
-1. On the **Set up single sign-on with SAML** page, click the edit/pen icon for **Basic SAML Configuration** to edit the settings.
+1. On the **Set up single sign-on with SAML** page, click the edit/pencil icon for **Basic SAML Configuration** to edit the settings.
![Edit Basic SAML Configuration](common/edit-urls.png)
@@ -116,19 +113,23 @@ In this section, you'll enable B.Simon to use Azure single sign-on by granting a
1. In the applications list, select **WhosOffice**. 1. In the app's overview page, find the **Manage** section and select **Users and groups**.
- ![The "Users and groups" link](common/users-groups-blade.png)
- 1. Select **Add user**, then select **Users and groups** in the **Add Assignment** dialog.
- ![The Add User link](common/add-assign-user.png)
- 1. In the **Users and groups** dialog, select **B.Simon** from the Users list, then click the **Select** button at the bottom of the screen.
-1. If you're expecting any role value in the SAML assertion, in the **Select Role** dialog, select the appropriate role for the user from the list and then click the **Select** button at the bottom of the screen.
+1. If you're expecting a role to be assigned to the users, you can select it from the **Select a role** dropdown. If no role has been set up for this app, you'll see the "Default Access" role selected.
1. In the **Add Assignment** dialog, click the **Assign** button. ## Configure WhosOffice SSO
-1. In a different web browser window, sign into WhosOffice website as an administrator.
+1. To automate the configuration within WhosOffice, you need to install **My Apps Secure Sign-in browser extension** by clicking **Install the extension**.
+
+ ![My apps extension](common/install-myappssecure-extension.png)
+
+2. After adding the extension to the browser, clicking **Set up WhosOffice** directs you to the WhosOffice application. From there, provide the admin credentials to sign in to WhosOffice. The browser extension automatically configures the application for you and automates steps 3-7.
+
+ ![Setup configuration](common/setup-sso.png)
+
+3. If you want to set up WhosOffice manually, in a different web browser window, sign in to your WhosOffice company site as an administrator.
1. Click on **Settings** and select **Company**.
@@ -164,18 +165,20 @@ In this section, you'll enable B.Simon to use Azure single sign-on by granting a
## Test SSO
-In this section, you test your Azure AD single sign-on configuration using the Access Panel.
+In this section, you test your Azure AD single sign-on configuration with the following options.
+
+#### SP initiated:
-When you click the WhosOffice tile in the Access Panel, you should be automatically signed in to the WhosOffice for which you set up SSO. For more information about the Access Panel, see [Introduction to the Access Panel](../user-help/my-apps-portal-end-user-access.md).
+* Click **Test this application** in the Azure portal. This redirects you to the WhosOffice Sign on URL, where you can initiate the login flow.
-## Additional resources
+* Go to the WhosOffice Sign-on URL directly and initiate the login flow from there.
-- [ List of Tutorials on How to Integrate SaaS Apps with Azure Active Directory ](./tutorial-list.md)
+#### IDP initiated:
-- [What is application access and single sign-on with Azure Active Directory? ](../manage-apps/what-is-single-sign-on.md)
+* Click **Test this application** in the Azure portal and you should be automatically signed in to the WhosOffice for which you set up the SSO.
-- [What is conditional access in Azure Active Directory?](../conditional-access/overview.md)
+You can also use Microsoft My Apps to test the application in any mode. When you click the WhosOffice tile in My Apps, if configured in SP mode you're redirected to the application sign-on page to initiate the login flow; if configured in IDP mode, you're automatically signed in to the WhosOffice for which you set up the SSO. For more information about My Apps, see [Introduction to My Apps](https://docs.microsoft.com/azure/active-directory/active-directory-saas-access-panel-introduction).
-- [Try WhosOffice with Azure AD](https://aad.portal.azure.com/)
+## Next steps
-- [What is session control in Microsoft Cloud App Security?](/cloud-app-security/proxy-intro-aad)\ No newline at end of file
+Once you configure WhosOffice, you can enforce session control, which protects against exfiltration and infiltration of your organization's sensitive data in real time. Session control extends from Conditional Access. [Learn how to enforce session control with Microsoft Cloud App Security](/cloud-app-security/proxy-deployment-any-app).
\ No newline at end of file
active-directory https://docs.microsoft.com/en-us/azure/active-directory/saas-apps/workplacebyfacebook-tutorial https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/saas-apps/workplacebyfacebook-tutorial.md
@@ -78,14 +78,14 @@ Follow these steps to enable Azure AD SSO in the Azure portal.
1. On the **Basic SAML Configuration** section, enter the values for the following fields:
- a. In the **Sign on URL** text box, type a URL using the following pattern:
- `https://<instancename>.facebook.com`
+ a. In the **Sign on URL** (found in WorkPlace as the Recipient URL) text box, type a URL using the following pattern:
+ `https://<instancename>.workplace.com/work/saml.php`
- b. In the **Identifier (Entity ID)** text box, type a URL using the following pattern:
- `https://www.facebook.com/company/<instanceID>`
+ b. In the **Identifier (Entity ID)** (found in WorkPlace as the Audience URL) text box, type a URL using the following pattern:
+ `https://www.workplace.com/company/<instanceID>`
- c. In the **Reply URL** text box, type a URL using the following pattern:
- `https://www.facebook.com/company/<instanceID>`
+ c. In the **Reply URL** (found in WorkPlace as the Assertion Consumer Service) text box, type a URL using the following pattern:
+ `https://<instancename>.workplace.com/work/saml.php`
> [!NOTE] > These values are not real. Update these values with the actual Sign-on URL, Identifier, and Reply URL. See the Authentication page of the Workplace Company Dashboard for the correct values for your Workplace community; this is explained later in the tutorial.
active-directory https://docs.microsoft.com/en-us/azure/active-directory/user-help/multi-factor-authentication-end-user-troubleshoot https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/user-help/multi-factor-authentication-end-user-troubleshoot.md
@@ -70,7 +70,7 @@ Not receiving your verification code is a common problem. The problem is typical
Try this | Guidance info --------- | ------------
-Use the Microsoft authenticator app or Verification codes | You are getting "You've hit our limit on verification calls" or "You've hit our limit on text verification codes" error messages during sign-in. <br/><br/>Microsoft may limit repeated authentication attempts that are performed by the same user in a short period of time. This limitation does not apply to the Microsoft Authenticator or verification code. If you have hit these limits, you can use the Authenticator App, verification code or try to sign in again in a few minutes.
+Use the Microsoft authenticator app or Verification codes | You are getting "You've hit our limit on verification calls" or "You've hit our limit on text verification codes" error messages during sign-in. <br/><br/>Microsoft may limit repeated authentication attempts that are performed by the same user in a short period of time. This limitation does not apply to the Microsoft Authenticator or verification code. If you have hit these limits, you can use the Authenticator App, verification code or try to sign in again in a few minutes. <br/><br/> You are getting the "Sorry, we're having trouble verifying your account" error message during sign-in. <br/><br/> Microsoft may limit or block voice or SMS authentication attempts that are performed by the same user, phone number, or organization due to a high number of failed voice or SMS authentication attempts. If you are experiencing this error, you can try another method, such as the Authenticator App or verification code, or reach out to your admin for support.
Restart your mobile device | Sometimes your device just needs a refresh. When you restart your device, all background processes and services are ended. The restart also shuts down the core components of your device. Any service or component is refreshed when you restart your device. Verify your security information is correct | Make sure your security verification method information is accurate, especially your phone numbers. If you put in the wrong phone number, all of your alerts will go to that incorrect number. Fortunately, that user won't be able to do anything with the alerts, but it also won't help you sign in to your account. To make sure your information is correct, see the instructions in the [Manage your two-factor verification method settings](multi-factor-authentication-end-user-manage-settings.md) article. Verify your notifications are turned on | Make sure your mobile device has notifications turned on. Ensure the following notification modes are allowed: <br/><br/> &bull; Phone calls <br/> &bull; Your authentication app <br/> &bull; Your text messaging app <br/><br/> Ensure these modes create an alert that is _visible_ on your device.
@@ -128,4 +128,4 @@ If you've tried these steps but are still running into problems, contact your or
- [Set up my account for two-step verification](multi-factor-authentication-end-user-first-time.md) -- [Microsoft Authenticator app FAQ](user-help-auth-app-faq.md)\ No newline at end of file
+- [Microsoft Authenticator app FAQ](user-help-auth-app-faq.md)
aks https://docs.microsoft.com/en-us/azure/aks/cluster-container-registry-integration https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/aks/cluster-container-registry-integration.md
@@ -4,7 +4,7 @@ description: Learn how to integrate Azure Kubernetes Service (AKS) with Azure Co
services: container-service manager: gwallace ms.topic: article
-ms.date: 02/25/2020
+ms.date: 01/08/2021
---
@@ -14,6 +14,9 @@ When you're using Azure Container Registry (ACR) with Azure Kubernetes Service (
You can set up the AKS to ACR integration in a few simple commands with the Azure CLI. This integration assigns the AcrPull role to the service principal associated to the AKS Cluster.
+> [!NOTE]
+> This article covers automatic authentication between AKS and ACR. If you need to pull an image from a private external registry, use an [image pull secret][Image Pull Secret].
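As a hedged illustration of the image pull secret approach mentioned in the note above, the following minimal sketch creates one with kubectl (the registry server, credentials, and secret name are hypothetical placeholders):

```azurepowershell-interactive
# Hypothetical registry, credentials, and secret name; substitute your own values.
kubectl create secret docker-registry regcred `
    --docker-server=myregistry.example.com `
    --docker-username=myuser `
    --docker-password=myP@ssw0rd
```

Pods then reference the secret (here `regcred`) through `imagePullSecrets` in their spec; see the linked Kubernetes documentation for details.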
+ ## Before you begin These examples require:
@@ -148,3 +151,4 @@ nginx0-deployment-669dfc4d4b-xdpd6 1/1 Running 0 20s
<!-- LINKS - external --> [AKS AKS CLI]: /cli/azure/aks?view=azure-cli-latest#az-aks-create
+[Image Pull secret]: https://kubernetes.io/docs/tasks/configure-pod-container/pull-image-private-registry/
\ No newline at end of file
aks https://docs.microsoft.com/en-us/azure/aks/policy-reference https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/aks/policy-reference.md
@@ -1,7 +1,7 @@
--- title: Built-in policy definitions for Azure Kubernetes Service description: Lists Azure Policy built-in policy definitions for Azure Kubernetes Service. These built-in policy definitions provide common approaches to managing your Azure resources.
-ms.date: 11/20/2020
+ms.date: 01/08/2021
ms.topic: reference ms.custom: subject-policy-reference ---
aks https://docs.microsoft.com/en-us/azure/aks/security-controls-policy https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/aks/security-controls-policy.md
@@ -1,7 +1,7 @@
--- title: Azure Policy Regulatory Compliance controls for Azure Kubernetes Service (AKS) description: Lists Azure Policy Regulatory Compliance controls available for Azure Kubernetes Service (AKS). These built-in policy definitions provide common approaches to managing the compliance of your Azure resources.
-ms.date: 11/20/2020
+ms.date: 01/08/2021
ms.topic: sample ms.service: container-service ms.custom: subject-policy-compliancecontrols
aks https://docs.microsoft.com/en-us/azure/aks/uptime-sla https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/aks/uptime-sla.md
@@ -3,7 +3,7 @@ title: Azure Kubernetes Service (AKS) with Uptime SLA
description: Learn about the optional Uptime SLA offering for the Azure Kubernetes Service (AKS) API Server. services: container-service ms.topic: conceptual
-ms.date: 06/24/2020
+ms.date: 01/08/2021
ms.custom: references_regions, devx-track-azurecli ---
@@ -21,7 +21,7 @@ Customers can still create unlimited free clusters with a service level objectiv
## Region availability * Uptime SLA is available in public regions and Azure Government regions where [AKS is supported](https://azure.microsoft.com/global-infrastructure/services/?products=kubernetes-service).
-* Uptime SLA is available for [private AKS clusters][private-clusters] in all regions where AKS is supported.
+* Uptime SLA is available for [private AKS clusters][private-clusters] in all public regions where AKS is supported.
## SLA terms and conditions
api-management https://docs.microsoft.com/en-us/azure/api-management/policy-reference https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/api-management/policy-reference.md
@@ -1,7 +1,7 @@
--- title: Built-in policy definitions for Azure API Management description: Lists Azure Policy built-in policy definitions for Azure API Management. These built-in policy definitions provide approaches to managing your Azure resources.
-ms.date: 11/20/2020
+ms.date: 01/08/2021
author: georgewallace ms.author: gwallace ms.service: api-management
app-service https://docs.microsoft.com/en-us/azure/app-service/app-service-web-nodejs-best-practices-and-troubleshoot-guide https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/app-service/app-service-web-nodejs-best-practices-and-troubleshoot-guide.md
@@ -240,9 +240,8 @@ Your application is throwing uncaught exceptions – Check `d:\\home\\LogFiles\\
The common cause for long application start times is a high number of files in the node\_modules. The application tries to load most of these files when starting. By default, since your files are stored on the network share on Azure App Service, loading many files can take time. Some solutions to make this process faster are:
-1. Be sure you have a flat dependency structure and no duplicate dependencies by using npm3 to install your modules.
-2. Try to lazy load your node\_modules and not load all of the modules at application start. To lazy load modules, the call to require('module') should be made when you actually need the module within the function before the first execution of module code.
-3. Azure App Service offers a feature called local cache. This feature copies your content from the network share to the local disk on the VM. Since the files are local, the load time of node\_modules is much faster.
+1. Try to lazy load your node\_modules and not load all of the modules at application start. To lazy load modules, the call to require('module') should be made when you actually need the module within the function, before the first execution of module code.
+2. Azure App Service offers a feature called local cache. This feature copies your content from the network share to the local disk on the VM. Since the files are local, the load time of node\_modules is much faster.
## IISNODE http status and substatus
app-service https://docs.microsoft.com/en-us/azure/app-service/policy-reference https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/app-service/policy-reference.md
@@ -1,7 +1,7 @@
--- title: Built-in policy definitions for Azure App Service description: Lists Azure Policy built-in policy definitions for Azure App Service. These built-in policy definitions provide common approaches to managing your Azure resources.
-ms.date: 11/20/2020
+ms.date: 01/08/2021
ms.topic: reference ms.custom: subject-policy-reference ---
app-service https://docs.microsoft.com/en-us/azure/app-service/security-controls-policy https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/app-service/security-controls-policy.md
@@ -1,7 +1,7 @@
--- title: Azure Policy Regulatory Compliance controls for Azure App Service description: Lists Azure Policy Regulatory Compliance controls available for Azure App Service. These built-in policy definitions provide common approaches to managing the compliance of your Azure resources.
-ms.date: 11/20/2020
+ms.date: 01/08/2021
ms.topic: sample ms.service: app-service ms.custom: subject-policy-compliancecontrols
application-gateway https://docs.microsoft.com/en-us/azure/application-gateway/application-gateway-autoscaling-zone-redundant https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/application-gateway/application-gateway-autoscaling-zone-redundant.md
@@ -96,7 +96,7 @@ This section describes features and limitations of the v2 SKU that differ from t
|Authentication certificate|Not supported.<br>For more information, see [Overview of end to end TLS with Application Gateway](ssl-overview.md#end-to-end-tls-with-the-v2-sku).| |Mixing Standard_v2 and Standard Application Gateway on the same subnet|Not supported| |User-Defined Route (UDR) on Application Gateway subnet|Supported (specific scenarios). In preview.<br> For more information about supported scenarios, see [Application Gateway configuration overview](configuration-infrastructure.md#supported-user-defined-routes).|
-|NSG for Inbound port range| - 65200 to 65535 for Standard_v2 SKU<br>- 65503 to 65534 for Standard SKU.<br>For more information, see the [FAQ](application-gateway-faq.md#are-network-security-groups-supported-on-the-application-gateway-subnet).|
+|NSG for Inbound port range| - 65200 to 65535 for Standard_v2 SKU<br>- 65503 to 65534 for Standard SKU.<br>For more information, see the [FAQ](application-gateway-faq.yml#are-network-security-groups-supported-on-the-application-gateway-subnet).|
|Performance logs in Azure diagnostics|Not supported.<br>Azure metrics should be used.| |Billing|Billing scheduled to start on July 1, 2019.| |FIPS mode|These are currently not supported.|
application-gateway https://docs.microsoft.com/en-us/azure/application-gateway/application-gateway-faq-md https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/application-gateway/application-gateway-faq-md.md new file mode 100644
@@ -0,0 +1,477 @@
+---
+title: Frequently asked questions about Azure Application Gateway
+description: Find answers to frequently asked questions about Azure Application Gateway.
+services: application-gateway
+author: vhorne
+ms.service: application-gateway
+ms.topic: article
+ms.date: 05/26/2020
+ms.author: victorh
+ms.custom: references_regions
+---
+
+# Frequently asked questions about Application Gateway
+
+[!INCLUDE [updated-for-az](../../includes/updated-for-az.md)]
+
+The following are common questions asked about Azure Application Gateway.
+
+## General
+
+### What is Application Gateway?
+
+Azure Application Gateway provides an application delivery controller (ADC) as a service. It offers various layer 7 load-balancing capabilities for your applications. This service is highly available, scalable, and fully managed by Azure.
+
+### What features does Application Gateway support?
+
+Application Gateway supports autoscaling, TLS offloading, and end-to-end TLS, a web application firewall (WAF), cookie-based session affinity, URL path-based routing, multisite hosting, and other features. For a full list of supported features, see [Introduction to Application Gateway](./overview.md).
+
+### How do Application Gateway and Azure Load Balancer differ?
+
+Application Gateway is a layer 7 load balancer, which means it works only with web traffic (HTTP, HTTPS, WebSocket, and HTTP/2). It supports capabilities such as TLS termination, cookie-based session affinity, and round robin for load-balancing traffic. Load Balancer load-balances traffic at layer 4 (TCP or UDP).
+
+### What protocols does Application Gateway support?
+
+Application Gateway supports HTTP, HTTPS, HTTP/2, and WebSocket.
+
+### How does Application Gateway support HTTP/2?
+
+See [HTTP/2 support](./configuration-listeners.md#http2-support).
+
+### What resources are supported as part of a backend pool?
+
+See [supported backend resources](./application-gateway-components.md#backend-pools).
+
+### In what regions is Application Gateway available?
+
+Application Gateway v1 (Standard and WAF) is available in all regions of global Azure. It's also available in [Azure China 21Vianet](https://www.azure.cn/) and [Azure Government](https://azure.microsoft.com/overview/clouds/government/).
+
+For Application Gateway v2 (Standard_v2 and WAF_v2) availability, see [supported regions for Application Gateway v2](./application-gateway-autoscaling-zone-redundant.md#supported-regions).
+
+### Is this deployment dedicated for my subscription, or is it shared across customers?
+
+Application Gateway is a dedicated deployment in your virtual network.
+
+### Does Application Gateway support HTTP-to-HTTPS redirection?
+
+Redirection is supported. See [Application Gateway redirect overview](./redirect-overview.md).
+
+### In what order are listeners processed?
+
+See the [order of listener processing](./configuration-listeners.md#order-of-processing-listeners).
+
+### Where do I find the Application Gateway IP and DNS?
+
+If you're using a public IP address as an endpoint, you'll find the IP and DNS information on the public IP address resource. Or find it in the portal, on the overview page for the application gateway. If you're using internal IP addresses, find the information on the overview page.
+
+For the v2 SKU, open the public IP resource and select **Configuration**. The **DNS name label (optional)** field is available to configure the DNS name.
+
+### What are the settings for Keep-Alive timeout and TCP idle timeout?
+
+*Keep-Alive timeout* governs how long the Application Gateway will wait for a client to send another HTTP request on a persistent connection before reusing it or closing it. *TCP idle timeout* governs how long a TCP connection is kept open in case of no activity.
+
+The *Keep-Alive timeout* in the Application Gateway v1 SKU is 120 seconds and in the v2 SKU it's 75 seconds. The *TCP idle timeout* is a 4-minute default on the frontend virtual IP (VIP) of both the v1 and v2 SKUs of Application Gateway. You can configure the TCP idle timeout value on v1 and v2 Application Gateways to be anywhere between 4 minutes and 30 minutes. For both v1 and v2 Application Gateways, you'll need to navigate to the public IP of the Application Gateway and change the TCP idle timeout under the "Configuration" blade of the public IP in the portal. You can set the TCP idle timeout value of the public IP through PowerShell by running the following commands:
+
+```azurepowershell-interactive
+$publicIP = Get-AzPublicIpAddress -Name MyPublicIP -ResourceGroupName MyResourceGroup
+$publicIP.IdleTimeoutInMinutes = "15"
+Set-AzPublicIpAddress -PublicIpAddress $publicIP
+```
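To confirm the change, you can read the value back; a quick check using the same names as the snippet above:

```azurepowershell-interactive
# Returns 15 if the update above succeeded.
(Get-AzPublicIpAddress -Name MyPublicIP -ResourceGroupName MyResourceGroup).IdleTimeoutInMinutes
```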
+
+### Does the IP or DNS name change over the lifetime of the application gateway?
+
+In Application Gateway V1 SKU, the VIP can change if you stop and start the application gateway. But the DNS name associated with the application gateway doesn't change over the lifetime of the gateway. Because the DNS name doesn't change, you should use a CNAME alias and point it to the DNS address of the application gateway. In Application Gateway V2 SKU, you can set the IP address as static, so the IP and DNS name won't change over the lifetime of the application gateway.
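As a hedged sketch of the CNAME approach using Azure DNS (the zone, record, and gateway DNS names below are hypothetical):

```azurepowershell-interactive
# Point www.contoso.com at the application gateway's stable DNS name.
$cname = New-AzDnsRecordConfig -Cname "myappgw.eastus.cloudapp.azure.com"
New-AzDnsRecordSet -Name "www" -RecordType CNAME -ZoneName "contoso.com" `
    -ResourceGroupName "MyResourceGroup" -Ttl 3600 -DnsRecords $cname
```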
+
+### Does Application Gateway support static IP?
+
+Yes, the Application Gateway v2 SKU supports static public IP addresses. The v1 SKU supports static internal IPs.
+
+### Does Application Gateway support multiple public IPs on the gateway?
+
+An application gateway supports only one public IP address.
+
+### How large should I make my subnet for Application Gateway?
+
+See [Application Gateway subnet size considerations](./configuration-infrastructure.md#size-of-the-subnet).
+
+### Can I deploy more than one Application Gateway resource to a single subnet?
+
+Yes. In addition to multiple instances of a given Application Gateway deployment, you can provision another unique Application Gateway resource to an existing subnet that contains a different Application Gateway resource.
+
+A single subnet can't support both v2 and v1 Application Gateway SKUs.
+
+### Does Application Gateway v2 support user-defined routes (UDR)?
+
+Yes, but only specific scenarios. For more information, see [Application Gateway infrastructure configuration](configuration-infrastructure.md#supported-user-defined-routes).
+
+### Does Application Gateway support x-forwarded-for headers?
+
+Yes. See [Modifications to a request](./how-application-gateway-works.md#modifications-to-the-request).
+
+### How long does it take to deploy an application gateway? Will my application gateway work while it's being updated?
+
+New Application Gateway v1 SKU deployments can take up to 20 minutes to provision. Changes to instance size or count aren't disruptive, and the gateway remains active during this time.
+
+Most deployments that use the v2 SKU take around 6 minutes to provision. However, it can take longer depending on the type of deployment. For example, deployments across multiple Availability Zones with many instances can take more than 6 minutes.
+
+### Can I use Exchange Server as a backend with Application Gateway?
+
+No. Application Gateway doesn't support email protocols such as SMTP, IMAP, and POP3.
+
+### Is there guidance available to migrate from the v1 SKU to the v2 SKU?
+
+Yes. For details, see [Migrate Azure Application Gateway and Web Application Firewall from v1 to v2](migrate-v1-v2.md).
+
+### Will the Application Gateway v1 SKU continue to be supported?
+
+Yes. The Application Gateway v1 SKU will continue to be supported. However, it is strongly recommended that you move to v2 to take advantage of the feature updates in that SKU. For more information, see [Autoscaling and Zone-redundant Application Gateway v2](application-gateway-autoscaling-zone-redundant.md).
+
+### Does Application Gateway V2 support proxying requests with NTLM authentication?
+
+No. Application Gateway V2 doesn't support proxying requests with NTLM authentication.
+
+### Does Application Gateway affinity cookie support SameSite attribute?
+Yes, the [Chromium browser](https://www.chromium.org/Home) [v80 update](https://chromiumdash.appspot.com/schedule) introduced a mandate that HTTP cookies without the SameSite attribute be treated as SameSite=Lax. This means that the Application Gateway affinity cookie won't be sent by the browser in a third-party context.
+
+To support this scenario, Application Gateway injects another cookie called *ApplicationGatewayAffinityCORS* in addition to the existing *ApplicationGatewayAffinity* cookie. These cookies are similar, but the *ApplicationGatewayAffinityCORS* cookie has two more attributes added to it: *SameSite=None; Secure*. These attributes maintain sticky sessions even for cross-origin requests. See the [cookie based affinity section](configuration-http-settings.md#cookie-based-affinity) for more information.
+
+## Performance
+
+### How does Application Gateway support high availability and scalability?
+
+The Application Gateway v1 SKU supports high-availability scenarios when you've deployed two or more instances. Azure distributes these instances across update and fault domains to ensure that instances don't all fail at the same time. The v1 SKU supports scalability by adding multiple instances of the same gateway to share the load.
+
+The v2 SKU automatically ensures that new instances are spread across fault domains and update domains. If you choose zone redundancy, the newest instances are also spread across availability zones to offer zonal failure resiliency.
+
+### How do I achieve a DR scenario across datacenters by using Application Gateway?
+
+Use Traffic Manager to distribute traffic across multiple application gateways in different datacenters.
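A minimal sketch of that pattern with Azure PowerShell, assuming gateways are already deployed in two regions (profile, endpoint, and DNS names are hypothetical):

```azurepowershell-interactive
# Priority routing sends traffic to the first healthy endpoint.
$tmProfile = New-AzTrafficManagerProfile -Name "appgw-dr" -ResourceGroupName "MyResourceGroup" `
    -TrafficRoutingMethod Priority -RelativeDnsName "appgw-dr-contoso" -Ttl 30 `
    -MonitorProtocol HTTPS -MonitorPort 443 -MonitorPath "/"

# Register each regional gateway's public DNS name as an external endpoint.
New-AzTrafficManagerEndpoint -Name "eastus-gw" -ProfileName "appgw-dr" `
    -ResourceGroupName "MyResourceGroup" -Type ExternalEndpoints `
    -Target "myappgw-east.eastus.cloudapp.azure.com" -EndpointStatus Enabled -Priority 1
New-AzTrafficManagerEndpoint -Name "westus-gw" -ProfileName "appgw-dr" `
    -ResourceGroupName "MyResourceGroup" -Type ExternalEndpoints `
    -Target "myappgw-west.westus.cloudapp.azure.com" -EndpointStatus Enabled -Priority 2
```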
+
+### Does Application Gateway support autoscaling?
+
+Yes, the Application Gateway v2 SKU supports autoscaling. For more information, see [Autoscaling and Zone-redundant Application Gateway](application-gateway-autoscaling-zone-redundant.md).
+
+### Does manual or automatic scale up or scale down cause downtime?
+
+No. Instances are distributed across upgrade domains and fault domains.
+
+### Does Application Gateway support connection draining?
+
+Yes. You can set up connection draining to change members within a backend pool without disruption. For more information, see [connection draining section of Application Gateway](features.md#connection-draining).
+
+### Can I change instance size from medium to large without disruption?
+
+Yes.
+
+## Configuration
+
+### Is Application Gateway always deployed in a virtual network?
+
+Yes. Application Gateway is always deployed in a virtual network subnet. This subnet can contain only application gateways. For more information, see [virtual network and subnet requirements](./configuration-infrastructure.md#virtual-network-and-dedicated-subnet).
+
+### Can Application Gateway communicate with instances outside of its virtual network or outside of its subscription?
+
+As long as you have IP connectivity, Application Gateway can communicate with instances outside of the virtual network that it's in. Application Gateway can also communicate with instances outside of the subscription it's in. If you plan to use internal IPs as backend pool members, use [virtual network peering](../virtual-network/virtual-network-peering-overview.md) or [Azure VPN Gateway](../vpn-gateway/vpn-gateway-about-vpngateways.md).
+
+### Can I deploy anything else in the application gateway subnet?
+
+No. But you can deploy other application gateways in the subnet.
+
+### Are network security groups supported on the application gateway subnet?
+
+See [Network security groups in the Application Gateway subnet](./configuration-infrastructure.md#network-security-groups).
+
+### Does the application gateway subnet support user-defined routes?
+
+See [User-defined routes supported in the Application Gateway subnet](./configuration-infrastructure.md#supported-user-defined-routes).
+
+### Are service endpoint policies supported in the Application Gateway subnet?
+
+No. [Service endpoint policies](../virtual-network/virtual-network-service-endpoint-policies-overview.md) for storage accounts are not supported in the Application Gateway subnet, and configuring one will block Azure infrastructure traffic.
+
+### What are the limits on Application Gateway? Can I increase these limits?
+
+See [Application Gateway limits](../azure-resource-manager/management/azure-subscription-service-limits.md#application-gateway-limits).
+
+### Can I simultaneously use Application Gateway for both external and internal traffic?
+
+Yes. Application Gateway supports one internal IP and one external IP per application gateway.
+
+### Does Application Gateway support virtual network peering?
+
+Yes. Virtual network peering helps load-balance traffic in other virtual networks.
+
+### Can I talk to on-premises servers when they're connected by ExpressRoute or VPN tunnels?
+
+Yes, as long as traffic is allowed.
+
+### Can one backend pool serve many applications on different ports?
+
+Microservice architecture is supported. To probe on different ports, you need to configure multiple HTTP settings.
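For example, a hedged sketch of two HTTP settings that address the same backend pool on different ports (names and ports are hypothetical):

```azurepowershell-interactive
# One HTTP setting per application port; each can carry its own health probe.
$api = New-AzApplicationGatewayBackendHttpSetting -Name "api-settings" -Port 8080 `
    -Protocol Http -CookieBasedAffinity Disabled -RequestTimeout 30
$web = New-AzApplicationGatewayBackendHttpSetting -Name "web-settings" -Port 8081 `
    -Protocol Http -CookieBasedAffinity Disabled -RequestTimeout 30
```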
+
+### Do custom probes support wildcards or regex on response data?
+
+No.
+
+### How are routing rules processed in Application Gateway?
+
+See [Order of processing rules](./configuration-request-routing-rules.md#order-of-processing-rules).
+
+### For custom probes, what does the Host field signify?
+
+The Host field specifies the name to send the probe to when you've configured multisite on Application Gateway. Otherwise, use '127.0.0.1'. This value is different from the virtual machine host name. The probe URL format is \<protocol\>://\<host\>:\<port\>\<path\>.
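A hedged sketch of a custom probe that sets the Host field for a multisite configuration (host name and path are hypothetical):

```azurepowershell-interactive
# -HostName is what the probe sends; it need not match the VM host name.
$probe = New-AzApplicationGatewayProbeConfig -Name "contoso-probe" -Protocol Http `
    -HostName "contoso.com" -Path "/health" -Interval 30 -Timeout 30 -UnhealthyThreshold 3
```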
+
+### Can I allow Application Gateway access to only a few source IP addresses?
+
+Yes. See [restrict access to specific source IPs](./configuration-infrastructure.md#allow-access-to-a-few-source-ips).
+
+### Can I use the same port for both public-facing and private-facing listeners?
+
+No.
+
+### Does Application Gateway support IPv6?
+
+Application Gateway v2 does not currently support IPv6. It can operate in a dual stack VNet using only IPv4, but the gateway subnet must be IPv4-only. Application Gateway v1 does not support dual stack VNets.
+
+### How do I use Application Gateway V2 with only private frontend IP address?
+
+Application Gateway V2 currently does not support a private-IP-only mode. It supports the following combinations:
+* Private IP and Public IP
+* Public IP only
+
+But if you'd like to use Application Gateway V2 with only a private IP, you can follow the process below:
+1. Create an Application Gateway with both public and private frontend IP addresses.
+2. Do not create any listeners for the public frontend IP address. Application Gateway will not listen to any traffic on the public IP address if no listeners are created for it.
+3. Create and attach a [Network Security Group](../virtual-network/network-security-groups-overview.md) for the Application Gateway subnet with the following configuration, in the order of priority (a PowerShell sketch of these rules follows the sample image below):
+
+   a. Allow traffic from Source as **GatewayManager** service tag and Destination as **Any** and Destination port as **65200-65535**. This port range is required for Azure infrastructure communication. These ports are protected (locked down) by certificate authentication. External entities, including the Gateway user administrators, can't initiate changes on those endpoints without appropriate certificates in place.
+
+   b. Allow traffic from Source as **AzureLoadBalancer** service tag and Destination and destination port as **Any**.
+
+   c. Deny all inbound traffic from Source as **Internet** service tag and Destination and destination port as **Any**. Give this rule the *least priority* in the inbound rules.
+
+   d. Keep the default rules like allowing VirtualNetwork inbound so that access on the private IP address is not blocked.
+
+   e. Outbound internet connectivity can't be blocked. Otherwise, you will face issues with logging, metrics, and so on.
+
+Sample NSG configuration for private IP only access:
+![Application Gateway V2 NSG Configuration for private IP access only](./media/application-gateway-faq/appgw-privip-nsg.png)
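+
+A minimal Azure PowerShell sketch of the rules above; resource names, location, and priorities are illustrative assumptions:
+
+```powershell
+# a. Allow Azure infrastructure communication on ports 65200-65535.
+$ruleGwManager = New-AzNetworkSecurityRuleConfig -Name "AllowGatewayManager" `
+    -Access Allow -Protocol Tcp -Direction Inbound -Priority 100 `
+    -SourceAddressPrefix GatewayManager -SourcePortRange "*" `
+    -DestinationAddressPrefix "*" -DestinationPortRange "65200-65535"
+
+# b. Allow the Azure load balancer.
+$ruleLb = New-AzNetworkSecurityRuleConfig -Name "AllowAzureLoadBalancer" `
+    -Access Allow -Protocol "*" -Direction Inbound -Priority 110 `
+    -SourceAddressPrefix AzureLoadBalancer -SourcePortRange "*" `
+    -DestinationAddressPrefix "*" -DestinationPortRange "*"
+
+# c. Deny all other inbound internet traffic, at the lowest priority.
+$ruleDenyInternet = New-AzNetworkSecurityRuleConfig -Name "DenyInternetInbound" `
+    -Access Deny -Protocol "*" -Direction Inbound -Priority 4096 `
+    -SourceAddressPrefix Internet -SourcePortRange "*" `
+    -DestinationAddressPrefix "*" -DestinationPortRange "*"
+
+# d. Default rules such as AllowVnetInBound remain, so access over the
+# private IP address isn't blocked; per (e), don't add outbound deny rules.
+$nsg = New-AzNetworkSecurityGroup -ResourceGroupName "myResourceGroup" `
+    -Location "eastus" -Name "appGwPrivateNsg" `
+    -SecurityRules $ruleGwManager, $ruleLb, $ruleDenyInternet
+```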
+
+## Configuration - TLS
+
+### What certificates does Application Gateway support?
+
+Application Gateway supports self-signed certificates, certificate authority (CA) certificates, Extended Validation (EV) certificates, multi-domain (SAN) certificates, and wildcard certificates.
+
+### What cipher suites does Application Gateway support?
+
+Application Gateway supports the following cipher suites.
+
+- TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384
+- TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256
+- TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA384
+- TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256
+- TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA
+- TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA
+- TLS_DHE_RSA_WITH_AES_256_GCM_SHA384
+- TLS_DHE_RSA_WITH_AES_128_GCM_SHA256
+- TLS_DHE_RSA_WITH_AES_256_CBC_SHA
+- TLS_DHE_RSA_WITH_AES_128_CBC_SHA
+- TLS_RSA_WITH_AES_256_GCM_SHA384
+- TLS_RSA_WITH_AES_128_GCM_SHA256
+- TLS_RSA_WITH_AES_256_CBC_SHA256
+- TLS_RSA_WITH_AES_128_CBC_SHA256
+- TLS_RSA_WITH_AES_256_CBC_SHA
+- TLS_RSA_WITH_AES_128_CBC_SHA
+- TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384
+- TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256
+- TLS_ECDHE_ECDSA_WITH_AES_256_CBC_SHA384
+- TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256
+- TLS_ECDHE_ECDSA_WITH_AES_256_CBC_SHA
+- TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA
+- TLS_DHE_DSS_WITH_AES_256_CBC_SHA256
+- TLS_DHE_DSS_WITH_AES_128_CBC_SHA256
+- TLS_DHE_DSS_WITH_AES_256_CBC_SHA
+- TLS_DHE_DSS_WITH_AES_128_CBC_SHA
+- TLS_RSA_WITH_3DES_EDE_CBC_SHA
+- TLS_DHE_DSS_WITH_3DES_EDE_CBC_SHA
+
+For information on how to customize TLS options, see [Configure TLS policy versions and cipher suites on Application Gateway](application-gateway-configure-ssl-policy-powershell.md).
+
+### Does Application Gateway support reencryption of traffic to the backend?
+
+Yes. Application Gateway supports TLS offload and end-to-end TLS, which reencrypt traffic to the backend.
+
+### Can I configure TLS policy to control TLS protocol versions?
+
+Yes. You can configure Application Gateway to deny TLS 1.0, TLS 1.1, and TLS 1.2. By default, SSL 2.0 and 3.0 are already disabled and aren't configurable.
+
+### Can I configure cipher suites and policy order?
+
+Yes. In Application Gateway, you can [configure cipher suites](application-gateway-ssl-policy-overview.md). To define a custom policy, enable at least one of the following cipher suites.
+
+* TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256
+* TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256
+* TLS_DHE_RSA_WITH_AES_128_GCM_SHA256
+* TLS_RSA_WITH_AES_128_GCM_SHA256
+* TLS_RSA_WITH_AES_256_CBC_SHA256
+* TLS_RSA_WITH_AES_128_CBC_SHA256
+
+Application Gateway uses SHA256 for backend management.
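+
+As a hedged sketch, assuming the Az PowerShell module and illustrative resource names, a custom TLS policy could be applied like this:
+
+```powershell
+# Apply a custom TLS policy; the gateway name and the cipher suite
+# selection below are illustrative assumptions.
+$appGw = Get-AzApplicationGateway -Name "myAppGateway" -ResourceGroupName "myResourceGroup"
+Set-AzApplicationGatewaySslPolicy -ApplicationGateway $appGw -PolicyType Custom `
+    -MinProtocolVersion TLSv1_2 `
+    -CipherSuite "TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256", "TLS_RSA_WITH_AES_128_GCM_SHA256"
+Set-AzApplicationGateway -ApplicationGateway $appGw
+```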
+
+### How many TLS/SSL certificates does Application Gateway support?
+
+Application Gateway supports up to 100 TLS/SSL certificates.
+
+### How many authentication certificates for backend reencryption does Application Gateway support?
+
+Application Gateway supports up to 100 authentication certificates.
+
+### Does Application Gateway natively integrate with Azure Key Vault?
+
+Yes, the Application Gateway v2 SKU supports Key Vault. For more information, see [TLS termination with Key Vault certificates](key-vault-certs.md).
+
+### How do I configure HTTPS listeners for .com and .net sites?
+
+For multiple domain-based (host-based) routing, you can create multisite listeners, set up listeners that use HTTPS as the protocol, and associate the listeners with the routing rules. For more information, see [Hosting multiple sites by using Application Gateway](./multiple-site-overview.md).
+
+### Can I use special characters in my .pfx file password?
+
+No, use only alphanumeric characters in your .pfx file password.
+
+### My EV certificate is issued by DigiCert and my intermediate certificate has been revoked. How do I renew my certificate on Application Gateway?
+
+Certificate Authority (CA) Browser members recently published reports detailing multiple certificates, issued by CA vendors and used by our customers, Microsoft, and the greater technology community, that were out of compliance with industry standards for publicly trusted CAs. The reports regarding the non-compliant CAs can be found here:
+
+* [Bug 1649951](https://bugzilla.mozilla.org/show_bug.cgi?id=1649951)
+* [Bug 1650910](https://bugzilla.mozilla.org/show_bug.cgi?id=1650910)
+
+As per the industry's compliance requirements, CA vendors began revoking non-compliant CAs and issuing compliant CAs, which requires customers to have their certificates reissued. Microsoft is partnering closely with these vendors to minimize the potential impact to Azure services, **however your self-issued certificates or certificates used in "Bring Your Own Certificate" (BYOC) scenarios are still at risk of being unexpectedly revoked**.
+
+To check whether certificates used by your application have been revoked, see [DigiCert's Announcement](https://knowledge.digicert.com/alerts/DigiCert-ICA-Replacement) and the [Certificate Revocation Tracker](https://misissued.com/#revoked). If your certificates have been revoked, or will be revoked, you'll need to request new certificates from the CA vendor used in your applications. To avoid your application's availability being interrupted due to certificates being unexpectedly revoked, or to update a certificate that has been revoked, refer to our Azure updates post for remediation links for the various Azure services that support BYOC: https://azure.microsoft.com/updates/certificateauthorityrevocation/
+
+For Application Gateway-specific information, see below.
+
+If you're using a certificate issued by one of the revoked ICAs, your application's availability might be interrupted, and depending on your application, you may receive a variety of error messages, including but not limited to:
+
+1. Invalid certificate/revoked certificate
+2. Connection timed out
+3. HTTP 502
+
+To avoid any interruption to your application due to this issue, or to replace a certificate that has been revoked, take the following actions:
+
+1. Contact your certificate provider for information on how to reissue your certificates
+2. Once reissued, update your certificates on the Azure Application Gateway/WAF with the complete [chain of trust](/windows/win32/seccrypto/certificate-chains) (leaf, intermediate, root certificate). Based on where you're using your certificate, either on the listener or the HTTP settings of the Application Gateway, follow the steps below to update the certificates, and check the documentation links mentioned for more information.
+3. Update your backend application servers to use the reissued certificate. Depending on the backend server that you're using, your certificate update steps may vary. Check the documentation from your vendor.
+
+To update the certificate in your listener:
+
+1. In the [Azure portal](https://portal.azure.com/), open your Application Gateway resource
+2. Open the listener settings associated with your certificate
+3. Click "Renew or edit selected certificate"
+4. Upload your new PFX certificate with the password and click Save
+5. Access the website and verify if the site is working as expected
+For more information, see the documentation [here](./renew-certificates.md).
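+
+If you prefer PowerShell to the portal, here's a minimal sketch of the same listener certificate update; the resource names, file path, and password are illustrative assumptions:
+
+```powershell
+# Replace the listener's TLS certificate with the reissued PFX file.
+# All names and the file path below are placeholders.
+$appGw = Get-AzApplicationGateway -Name "myAppGateway" -ResourceGroupName "myResourceGroup"
+$password = ConvertTo-SecureString -String "<pfx-password>" -AsPlainText -Force
+Set-AzApplicationGatewaySslCertificate -ApplicationGateway $appGw `
+    -Name "myListenerCert" -CertificateFile "C:\certs\reissued.pfx" -Password $password
+Set-AzApplicationGateway -ApplicationGateway $appGw
+```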
+
+If you're referencing certificates from Azure Key Vault in your Application Gateway listener, we recommend the following steps for a quick change:
+
+1. In the [Azure portal](https://portal.azure.com/), navigate to the Azure Key Vault that's associated with the Application Gateway
+2. Add/import the reissued certificate to your store. For more information on how to do this, see the documentation [here](../key-vault/certificates/quick-create-portal.md).
+3. Once the certificate has been imported, navigate to your Application Gateway listener settings, and under "Choose a certificate from Key Vault", select the "Certificate" drop-down and choose the recently added certificate
+4. Click Save
+For more information on TLS termination on Application Gateway with Key Vault certificates, see the documentation [here](./key-vault-certs.md).
+
+To update the certificate in your HTTP Settings:
+
+If you're using the v1 SKU of the Application Gateway/WAF service, you'll have to upload the new certificate as your backend authentication certificate.
+1. In the [Azure portal](https://portal.azure.com/), open your Application Gateway resource
+2. Open the HTTP settings associated with your certificate
+3. Click "Add certificate", upload the reissued certificate, and click Save
+4. You can remove the old certificate later by clicking the "..." options button next to the old certificate, then selecting Delete and clicking Save.
+For more information, check documentation [here](./end-to-end-ssl-portal.md#add-authenticationtrusted-root-certificates-of-back-end-servers).
+
+If you're using the v2 SKU of the Application Gateway/WAF service, you don't have to upload the new certificate in the HTTP settings, because the v2 SKU uses "trusted root certificates" and no action needs to be taken here.
+
+## Configuration - ingress controller for AKS
+
+### What is an Ingress Controller?
+
+Kubernetes allows creation of `deployment` and `service` resources to expose a group of pods internally in the cluster. To expose the same service externally, an [`Ingress`](https://kubernetes.io/docs/concepts/services-networking/ingress/) resource is defined, which provides load balancing, TLS termination, and name-based virtual hosting.
+To satisfy this `Ingress` resource, an Ingress Controller is required. It listens for changes to `Ingress` resources and configures the load balancer policies accordingly.
+
+The Application Gateway Ingress Controller (AGIC) allows [Azure Application Gateway](https://azure.microsoft.com/services/application-gateway/) to be used as the ingress for an [Azure Kubernetes Service](https://azure.microsoft.com/services/kubernetes-service/) (AKS) cluster.
+
+### Can a single ingress controller instance manage multiple Application Gateways?
+
+Currently, one instance of the Ingress Controller can only be associated with one Application Gateway.
+
+### Why is my AKS cluster with kubenet not working with AGIC?
+
+AGIC tries to automatically associate the route table resource to the Application Gateway subnet, but it may fail to do so because AGIC lacks the required permissions. If AGIC is unable to associate the route table to the Application Gateway subnet, there will be an error in the AGIC logs saying so. In that case, you'll have to manually associate the route table created by the AKS cluster to the Application Gateway's subnet. For more information, see [Supported user-defined routes](configuration-infrastructure.md#supported-user-defined-routes).
+
+### Can I connect my AKS cluster and Application Gateway in separate virtual networks?
+
+Yes, as long as the virtual networks are peered and they don't have overlapping address spaces. If you're running AKS with kubenet, then be sure to associate the route table generated by AKS to the Application Gateway subnet.
+
+### What features are not supported on the AGIC add-on?
+
+See the differences between AGIC deployed through Helm and AGIC deployed as an AKS add-on [here](ingress-controller-overview.md#difference-between-helm-deployment-and-aks-add-on).
+
+### When should I use the add-on versus the Helm deployment?
+
+See the differences between AGIC deployed through Helm and AGIC deployed as an AKS add-on [here](ingress-controller-overview.md#difference-between-helm-deployment-and-aks-add-on), especially the tables documenting which scenarios are supported by AGIC deployed through Helm as opposed to the AKS add-on. In general, deploying through Helm allows you to test beta features and release candidates before an official release.
+
+### Can I control which version of AGIC will be deployed with the add-on?
+
+No. The AGIC add-on is a managed service, which means Microsoft automatically updates the add-on to the latest stable version.
+
+## Diagnostics and logging
+
+### What types of logs does Application Gateway provide?
+
+Application Gateway provides three logs:
+
+* **ApplicationGatewayAccessLog**: The access log contains each request submitted to the application gateway frontend. The data includes the caller's IP, URL requested, response latency, return code, and bytes in and out. It contains one record per application gateway.
+* **ApplicationGatewayPerformanceLog**: The performance log captures performance information for each application gateway. Information includes the throughput in bytes, total requests served, failed request count, and healthy and unhealthy backend instance count.
+* **ApplicationGatewayFirewallLog**: For application gateways that you configure with WAF, the firewall log contains requests that are logged through either detection mode or prevention mode.
+
+All logs are collected every 60 seconds. For more information, see [Backend health, diagnostics logs, and metrics for Application Gateway](application-gateway-diagnostics.md).
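+
+As a hedged sketch, assuming the Az PowerShell module and illustrative resource names, the three log categories can be routed to a Log Analytics workspace like this:
+
+```powershell
+# Enable diagnostics for all three Application Gateway log categories.
+# Resource names and the workspace resource ID are illustrative assumptions.
+$appGw = Get-AzApplicationGateway -Name "myAppGateway" -ResourceGroupName "myResourceGroup"
+Set-AzDiagnosticSetting -ResourceId $appGw.Id -Enabled $true `
+    -Category ApplicationGatewayAccessLog, ApplicationGatewayPerformanceLog, ApplicationGatewayFirewallLog `
+    -WorkspaceId "/subscriptions/<subscription-id>/resourceGroups/<rg>/providers/Microsoft.OperationalInsights/workspaces/<workspace>"
+```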
+
+### How do I know if my backend pool members are healthy?
+
+Verify health by using the PowerShell cmdlet `Get-AzApplicationGatewayBackendHealth` or the portal. For more information, see [Application Gateway diagnostics](application-gateway-diagnostics.md).
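+
+For example, assuming illustrative resource names:
+
+```powershell
+# Query the health of all backend pool members; names are placeholders.
+Get-AzApplicationGatewayBackendHealth -Name "myAppGateway" -ResourceGroupName "myResourceGroup"
+```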
+
+### What's the retention policy for the diagnostic logs?
+
+Diagnostic logs flow to the customer's storage account. Customers can set the retention policy based on their preference. Diagnostic logs can also be sent to an event hub or Azure Monitor logs. For more information, see [Application Gateway diagnostics](application-gateway-diagnostics.md).
+
+### How do I get audit logs for Application Gateway?
+
+In the portal, on the menu blade of an application gateway, select **Activity Log** to access the audit log.
+
+### Can I set alerts with Application Gateway?
+
+Yes. In Application Gateway, alerts are configured on metrics. For more information, see [Application Gateway metrics](./application-gateway-metrics.md) and [Receive alert notifications](../azure-monitor/platform/alerts-overview.md).
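+
+A minimal sketch of a metric alert on unhealthy backend hosts, assuming the Az PowerShell module and illustrative names:
+
+```powershell
+# Alert when any backend host is reported unhealthy over a 5-minute window.
+# Resource names, threshold, and severity are illustrative assumptions.
+$appGw = Get-AzApplicationGateway -Name "myAppGateway" -ResourceGroupName "myResourceGroup"
+$criteria = New-AzMetricAlertRuleV2Criteria -MetricName "UnhealthyHostCount" `
+    -TimeAggregation Average -Operator GreaterThan -Threshold 0
+Add-AzMetricAlertRuleV2 -Name "appGwUnhealthyHosts" -ResourceGroupName "myResourceGroup" `
+    -WindowSize 00:05:00 -Frequency 00:01:00 `
+    -TargetResourceId $appGw.Id -Condition $criteria -Severity 2
+```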
+
+### How do I analyze traffic statistics for Application Gateway?
+
+You can view and analyze access logs in several ways. Use Azure Monitor logs, Excel, Power BI, and so on.
+
+You can also use a Resource Manager template that installs and runs the popular [GoAccess](https://goaccess.io/) log analyzer for Application Gateway access logs. GoAccess provides valuable HTTP traffic statistics such as unique visitors, requested files, hosts, operating systems, browsers, and HTTP status codes. For more information, in GitHub, see the [Readme file in the Resource Manager template folder](https://aka.ms/appgwgoaccessreadme).
+
+### What could cause backend health to return an unknown status?
+
+Usually, you see an unknown status when access to the backend is blocked by a network security group (NSG), custom DNS, or user-defined routing (UDR) on the application gateway subnet. For more information, see [Backend health, diagnostics logging, and metrics for Application Gateway](application-gateway-diagnostics.md).
+
+### Are NSG flow logs supported on NSGs associated to Application Gateway v2 subnet?
+
+Due to current platform limitations, if you have an NSG on the Application Gateway v2 (Standard_v2, WAF_v2) subnet and you've enabled NSG flow logs on it, you'll see nondeterministic behavior. This scenario isn't currently supported.
+
+### Does Application Gateway store customer data?
+
+No, Application Gateway does not store customer data.
+
+## Next steps
+
+To learn more about Application Gateway, see [What is Azure Application Gateway?](overview.md).
application-gateway https://docs.microsoft.com/en-us/azure/application-gateway/configuration-front-end-ip https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/application-gateway/configuration-front-end-ip.md
@@ -20,7 +20,7 @@ Application Gateway V2 currently does not support only private IP mode. It suppo
* Private IP address and public IP address * Public IP address only
-For more information, see [Frequently asked questions about Application Gateway](application-gateway-faq.md#how-do-i-use-application-gateway-v2-with-only-private-frontend-ip-address).
+For more information, see [Frequently asked questions about Application Gateway](application-gateway-faq.yml#how-do-i-use-application-gateway-v2-with-only-private-frontend-ip-address).
A public IP address isn't required for an internal endpoint that's not exposed to the Internet. That's known as an *internal load-balancer* (ILB) endpoint or private frontend IP. An application gateway ILB is useful for internal line-of-business applications that aren't exposed to the Internet. It's also useful for services and tiers in a multi-tier application within a security boundary that aren't exposed to the Internet but that require round-robin load distribution, session stickiness, or TLS termination.
application-gateway https://docs.microsoft.com/en-us/azure/application-gateway/security-baseline https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/application-gateway/security-baseline.md
@@ -51,7 +51,7 @@ Note: There are some cases where NSG flow logs associated with your Azure Applic
* [Understand Network Security provided by Azure Security Center](../security-center/security-center-network-recommendations.md)
-* [FAQ for diagnostic and Logging for Azure Application Gateway](./application-gateway-faq.md#diagnostics-and-logging)
+* [FAQ for diagnostic and Logging for Azure Application Gateway](./application-gateway-faq.yml#what-types-of-logs-does-application-gateway-provide)
**Azure Security Center monitoring**: Yes
@@ -98,7 +98,7 @@ Note: There are some cases where NSG flow logs associated with your Azure Applic
* [Understand Network Security provided by Azure Security Center](../security-center/security-center-network-recommendations.md)
-* [FAQ for diagnostic and Logging for Azure Application Gateway](./application-gateway-faq.md#diagnostics-and-logging)
+* [FAQ for diagnostic and Logging for Azure Application Gateway](./application-gateway-faq.yml#what-types-of-logs-does-application-gateway-provide)
**Azure Security Center monitoring**: Currently not available
automation https://docs.microsoft.com/en-us/azure/automation/automation-managing-data https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/automation/automation-managing-data.md
@@ -3,7 +3,7 @@ title: Azure Automation data security
description: This article helps you learn how Azure Automation protects your privacy and secures your data. services: automation ms.subservice: shared-capabilities
-ms.date: 07/20/2020
+ms.date: 01/08/2021
ms.topic: conceptual --- # Management of Azure Automation data
@@ -20,11 +20,9 @@ To insure the security of data in transit to Azure Automation, we strongly encou
* DSC nodes
-Older versions of TLS/Secure Sockets Layer (SSL) have been found to be vulnerable and while they still currently work to allow backwards compatibility, they are **not recommended**. Starting in September 2020, we begin enforcing TLS 1.2 and later versions of the encryption protocol.
+Older versions of TLS/Secure Sockets Layer (SSL) have been found to be vulnerable, and while they still currently work to allow backwards compatibility, they are **not recommended**. We do not recommend explicitly setting your agent to only use TLS 1.2 unless absolutely necessary, as it can break platform-level security features that allow you to automatically detect and take advantage of newer, more secure protocols as they become available, such as TLS 1.3.
-We do not recommend explicitly setting your agent to only use TLS 1.2 unless absolutely necessary, as it can break platform level security features that allow you to automatically detect and take advantage of newer more secure protocols as they become available, such as TLS 1.3.
-
-For information about TLS 1.2 support with the Log Analytics agent for Windows and Linux, which is a dependency for the Hybrid Runbook Worker role, see [Log Analytics agent overview - TLS 1.2](..//azure-monitor/platform/log-analytics-agent.md#tls-12-protocol).
+For information about TLS 1.2 support with the Log Analytics agent for Windows and Linux, which is a dependency for the Hybrid Runbook Worker role, see [Log Analytics agent overview - TLS 1.2](..//azure-monitor/platform/log-analytics-agent.md#tls-12-protocol).
### Platform-specific guidance
@@ -45,7 +43,7 @@ The following table summarizes the retention policy for different resources.
|:--- |:--- | | Accounts |An account is permanently removed 30 days after a user deletes it. | | Assets |An asset is permanently removed 30 days after a user deletes it, or 30 days after a user deletes an account that holds the asset. Assets include variables, schedules, credentials, certificates, Python 2 packages, and connections. |
-| DSC Nodes |A DSC node is permanently removed 30 days after being unregistered from an Automation account using Azure portal or the [Unregister-AzAutomationDscNode](/powershell/module/az.automation/unregister-azautomationdscnode?view=azps-3.7.0) cmdlet in Windows PowerShell. A node is also permanently removed 30 days after a user deletes the account that holds the node. |
+| DSC Nodes |A DSC node is permanently removed 30 days after being unregistered from an Automation account using Azure portal or the [Unregister-AzAutomationDscNode](/powershell/module/az.automation/unregister-azautomationdscnode) cmdlet in Windows PowerShell. A node is also permanently removed 30 days after a user deletes the account that holds the node. |
| Jobs |A job is deleted and permanently removed 30 days after modification, for example, after the job completes, is stopped, or is suspended. | | Modules |A module is permanently removed 30 days after a user deletes it, or 30 days after a user deletes the account that holds the module. | | Node Configurations/MOF Files |An old node configuration is permanently removed 30 days after a new node configuration is generated. |
@@ -74,8 +72,7 @@ You can't retrieve the values for encrypted variables or the password fields of
### DSC configurations
-You can export your DSC configurations to script files using either the Azure portal or the
-[Export-AzAutomationDscConfiguration](/powershell/module/az.automation/export-azautomationdscconfiguration?view=azps-3.7.0) cmdlet in Windows PowerShell. You can import and use these configurations in another Automation account.
+You can export your DSC configurations to script files using either the Azure portal or the [Export-AzAutomationDscConfiguration](/powershell/module/az.automation/export-azautomationdscconfiguration) cmdlet in Windows PowerShell. You can import and use these configurations in another Automation account.
## Geo-replication in Azure Automation
automation https://docs.microsoft.com/en-us/azure/automation/automation-runbook-gallery https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/automation/automation-runbook-gallery.md
@@ -3,7 +3,7 @@ title: Use Azure Automation runbooks and modules in PowerShell Gallery
description: This article tells how to use runbooks and modules from Microsoft and the community in PowerShell Gallery. services: automation ms.subservice: process-automation
-ms.date: 12/17/2020
+ms.date: 01/08/2021
ms.topic: conceptual --- # Use runbooks and modules in PowerShell Gallery
@@ -11,7 +11,7 @@ ms.topic: conceptual
Rather than creating your own runbooks and modules in Azure Automation, you can access scenarios that have already been built by Microsoft and the community. You can get PowerShell runbooks and [modules](#modules-in-powershell-gallery) from the PowerShell Gallery and [Python runbooks](#use-python-runbooks) from the Azure Automation GitHub organization. You can also contribute to the community by sharing [scenarios that you develop](#add-a-powershell-runbook-to-the-gallery). > [!NOTE]
-> The TechNet Script Center is retiring. All of the runbooks from Script Center in the Runbook gallery have been moved to our [Automation GitHub organization](https://github.com/azureautomation).
+> The TechNet Script Center is retiring. All of the runbooks from Script Center in the Runbook gallery have been moved to our [Automation GitHub organization](https://github.com/azureautomation). For more information, see [here](https://techcommunity.microsoft.com/t5/azure-governance-and-management/azure-automation-runbooks-moving-to-github/ba-p/2039337).
## Runbooks in PowerShell Gallery
automation https://docs.microsoft.com/en-us/azure/automation/automation-runbook-types https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/automation/automation-runbook-types.md
@@ -3,7 +3,7 @@ title: Azure Automation runbook types
description: This article describes the types of runbooks that you can use in Azure Automation and considerations for determining which type to use. services: automation ms.subservice: process-automation
-ms.date: 12/22/2020
+ms.date: 01/08/2021
ms.topic: conceptual ---
@@ -112,7 +112,6 @@ Python runbooks compile under Python 2 and Python 3. Python 3 runbooks are curre
* To use third-party libraries, you must [import the packages](python-packages.md) into the Automation account. * Using **Start-AutomationRunbook** cmdlet in PowerShell/PowerShell Workflow to start a Python 3 runbook (preview) does not work. You can use **Start-AzAutomationRunbook** cmdlet from Az.Automation module or **Start-AzureRmAutomationRunbook** cmdlet from AzureRm.Automation module to work around this limitation.  * Python 3 runbooks (preview) and packages do not work with PowerShell.
-* Using a webhook to start a Python runbook is not supported.
+* Azure Automation does not support **sys.stderr**.
automation https://docs.microsoft.com/en-us/azure/automation/policy-reference https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/automation/policy-reference.md
@@ -1,7 +1,7 @@
--- title: Built-in policy definitions for Azure Automation description: Lists Azure Policy built-in policy definitions for Azure Automation. These built-in policy definitions provide common approaches to managing your Azure resources.
-ms.date: 11/20/2020
+ms.date: 01/08/2021
ms.topic: reference ms.custom: subject-policy-reference ---
automation https://docs.microsoft.com/en-us/azure/automation/security-controls-policy https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/automation/security-controls-policy.md
@@ -1,7 +1,7 @@
--- title: Azure Policy Regulatory Compliance controls for Azure Automation description: Lists Azure Policy Regulatory Compliance controls available for Azure Automation. These built-in policy definitions provide common approaches to managing the compliance of your Azure resources.
-ms.date: 11/20/2020
+ms.date: 01/08/2021
ms.topic: sample author: mgoedtel ms.author: magoedte
azure-app-configuration https://docs.microsoft.com/en-us/azure/azure-app-configuration/policy-reference https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-app-configuration/policy-reference.md
@@ -1,7 +1,7 @@
--- title: Built-in policy definitions for Azure App Configuration description: Lists Azure Policy built-in policy definitions for Azure App Configuration. These built-in policy definitions provide common approaches to managing your Azure resources.
-ms.date: 11/20/2020
+ms.date: 01/08/2021
ms.topic: reference author: AlexandraKemperMS ms.author: alkemper
azure-app-configuration https://docs.microsoft.com/en-us/azure/azure-app-configuration/quickstart-feature-flag-azure-functions-csharp https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-app-configuration/quickstart-feature-flag-azure-functions-csharp.md
@@ -14,7 +14,7 @@ ms.author: alkemper
--- # Quickstart: Add feature flags to an Azure Functions app
-In this quickstart, you create an implementation of feature management in an Azure Functions app using Azure App Configuration. You will use the App Configuration service to centrally store all your feature flags and control their states.
+In this quickstart, you create an Azure Functions app and use feature flags in it. You use the feature management capability of Azure App Configuration to centrally store all your feature flags and control their states.
The .NET Feature Management libraries extend the framework with feature flag support. These libraries are built on top of the .NET configuration system. They integrate with App Configuration through its .NET configuration provider.
@@ -43,66 +43,113 @@ The .NET Feature Management libraries extend the framework with feature flag sup
## Connect to an App Configuration store
-1. Right-click your project, and select **Manage NuGet Packages**. On the **Browse** tab, search and add the following NuGet packages to your project. Verify for `Microsoft.Extensions.DependencyInjection` that you are on the most recent stable build.
-
- ```
- Microsoft.Extensions.DependencyInjection
- Microsoft.Extensions.Configuration
- Microsoft.FeatureManagement
- ```
+This project will use [dependency injection in .NET Azure Functions](/azure/azure-functions/functions-dotnet-dependency-injection). It adds Azure App Configuration as an extra configuration source where your feature flags are stored.
+1. Right-click your project, and select **Manage NuGet Packages**. On the **Browse** tab, search for and add the following NuGet packages to your project.
+ - [Microsoft.Extensions.Configuration.AzureAppConfiguration](https://www.nuget.org/packages/Microsoft.Extensions.Configuration.AzureAppConfiguration/) version 4.1.0 or later
+ - [Microsoft.FeatureManagement](https://www.nuget.org/packages/Microsoft.FeatureManagement/) version 2.2.0 or later
+ - [Microsoft.Azure.Functions.Extensions](https://www.nuget.org/packages/Microsoft.Azure.Functions.Extensions/) version 1.1.0 or later
-1. Open *Function1.cs*, and add the namespaces of these packages.
+2. Add a new file, *Startup.cs*, with the following code. It defines a class named `Startup` that implements the `FunctionsStartup` abstract class. An assembly attribute is used to specify the type name used during Azure Functions startup.
```csharp
+ using System;
+ using Microsoft.Azure.Functions.Extensions.DependencyInjection;
using Microsoft.Extensions.Configuration; using Microsoft.FeatureManagement;
- using Microsoft.Extensions.DependencyInjection;
+
+ [assembly: FunctionsStartup(typeof(FunctionApp.Startup))]
+
+ namespace FunctionApp
+ {
+ class Startup : FunctionsStartup
+ {
+ public override void ConfigureAppConfiguration(IFunctionsConfigurationBuilder builder)
+ {
+ }
+
+ public override void Configure(IFunctionsHostBuilder builder)
+ {
+ }
+ }
+ }
```
-1. Add the `Function1` static constructor below to bootstrap the Azure App Configuration provider. Next add two `static` members, a field named `ServiceProvider` to create a singleton instance of `ServiceProvider`, and a property below `Function1` named `FeatureManager` to create a singleton instance of `IFeatureManager`. Then connect to App Configuration in `Function1` by calling `AddAzureAppConfiguration()`. This process will load the configuration at application startup. The same configuration instance will be used for all Functions calls later.
- ```csharp
- // Implements IDisposable, cached for life time of function
- private static ServiceProvider ServiceProvider;
+3. Update the `ConfigureAppConfiguration` method, and add the Azure App Configuration provider as an extra configuration source by calling `AddAzureAppConfiguration()`.
- static Function1()
+ The `UseFeatureFlags()` method tells the provider to load feature flags. All feature flags have a default cache expiration of 30 seconds before rechecking for changes. The expiration interval can be updated by setting the `FeatureFlagsOptions.CacheExpirationInterval` property passed to the `UseFeatureFlags` method.
+
+ ```csharp
+ public override void ConfigureAppConfiguration(IFunctionsConfigurationBuilder builder)
+ {
+ builder.ConfigurationBuilder.AddAzureAppConfiguration(options =>
{
- IConfigurationRoot configuration = new ConfigurationBuilder()
- .AddAzureAppConfiguration(options =>
- {
- options.Connect(Environment.GetEnvironmentVariable("ConnectionString"))
- .UseFeatureFlags();
- }).Build();
+ options.Connect(Environment.GetEnvironmentVariable("ConnectionString"))
+ .Select("_")
+ .UseFeatureFlags();
+ });
+ }
+ ```
+ > [!TIP]
+ > If you don't want any configuration other than feature flags to be loaded to your application, you can call `Select("_")` to only load a nonexistent dummy key "_". By default, all configuration key-values in your App Configuration store will be loaded if no `Select` method is called.
- var services = new ServiceCollection();
- services.AddSingleton<IConfiguration>(configuration).AddFeatureManagement();
+4. Update the `Configure` method to make Azure App Configuration services and feature manager available through dependency injection.
- ServiceProvider = services.BuildServiceProvider();
- }
+ ```csharp
+ public override void Configure(IFunctionsHostBuilder builder)
+ {
+ builder.Services.AddAzureAppConfiguration();
+ builder.Services.AddFeatureManagement();
+ }
+ ```
+
+5. Open *Function1.cs*, and add the following namespaces.
- private static IFeatureManager FeatureManager => ServiceProvider.GetRequiredService<IFeatureManager>();
+ ```csharp
+ using System.Linq;
+ using Microsoft.FeatureManagement;
+ using Microsoft.Extensions.Configuration.AzureAppConfiguration;
```
-1. Update the `Run` method to change value of the displayed message depending on the state of the feature flag.
+ Add a constructor used to obtain instances of `IFeatureManagerSnapshot` and `IConfigurationRefresherProvider` through dependency injection. From the `IConfigurationRefresherProvider`, you can obtain the instance of `IConfigurationRefresher`.
```csharp
- [FunctionName("Function1")]
- public static async Task<IActionResult> Run(
- [HttpTrigger(AuthorizationLevel.Anonymous, "get", "post", Route = null)] HttpRequest req,
- ILogger log)
- {
- string message = await FeatureManager.IsEnabledAsync("Beta")
- ? "The Feature Flag 'Beta' is turned ON"
- : "The Feature Flag 'Beta' is turned OFF";
-
- return (ActionResult)new OkObjectResult(message);
- }
+ private readonly IFeatureManagerSnapshot _featureManagerSnapshot;
+ private readonly IConfigurationRefresher _configurationRefresher;
+
+ public Function1(IFeatureManagerSnapshot featureManagerSnapshot, IConfigurationRefresherProvider refresherProvider)
+ {
+ _featureManagerSnapshot = featureManagerSnapshot;
+ _configurationRefresher = refresherProvider.Refreshers.First();
+ }
+ ```
+
+6. Update the `Run` method to change the value of the displayed message depending on the state of the feature flag.
+
+ The `TryRefreshAsync` method is called at the beginning of the Functions call to refresh feature flags. It will be a no-op if the cache expiration time window isn't reached. Remove the `await` operator if you prefer the feature flags to be refreshed without blocking the current Functions call. In that case, later Functions calls will get the updated value.
+
+ ```csharp
+ [FunctionName("Function1")]
+ public async Task<IActionResult> Run(
+ [HttpTrigger(AuthorizationLevel.Anonymous, "get", "post", Route = null)] HttpRequest req,
+ ILogger log)
+ {
+ log.LogInformation("C# HTTP trigger function processed a request.");
+
+ await _configurationRefresher.TryRefreshAsync();
+
+ string message = await _featureManagerSnapshot.IsEnabledAsync("Beta")
+ ? "The Feature Flag 'Beta' is turned ON"
+ : "The Feature Flag 'Beta' is turned OFF";
+
+ return (ActionResult)new OkObjectResult(message);
+ }
``` ## Test the function locally
-1. Set an environment variable named **ConnectionString**, where the value is the access key you retrieved earlier in your App Configuration store under **Access Keys**. If you use the Windows command prompt, run the following command and restart the command prompt to allow the change to take effect:
+1. Set an environment variable named **ConnectionString**, where the value is the connection string you retrieved earlier in your App Configuration store under **Access Keys**. If you use the Windows command prompt, run the following command and restart the command prompt to allow the change to take effect:
```cmd setx ConnectionString "connection-string-of-your-app-configuration-store"
@@ -130,24 +177,27 @@ The .NET Feature Management libraries extend the framework with feature flag sup
![Quickstart Function feature flag disabled](./media/quickstarts/functions-launch-ff-disabled.png)
-1. Sign in to the [Azure portal](https://portal.azure.com). Select **All resources**, and select the App Configuration store instance that you created.
+1. Sign in to the [Azure portal](https://portal.azure.com). Select **All resources**, and select the App Configuration store that you created.
-1. Select **Feature Manager**, and change the state of the **Beta** key to **On**.
+1. Select **Feature manager**, and change the state of the **Beta** key to **On**.
-1. Return to your command prompt and cancel the running process by pressing `Ctrl-C`. Restart your application by pressing F5.
-
-1. Copy the URL of your function from the Azure Functions runtime output using the same process as in Step 3. Paste the URL for the HTTP request into your browser's address bar. The browser response should have changed to indicate the feature flag `Beta` is turned on, as shown in the image below.
+1. Refresh the browser a few times. When the cached feature flag expires after 30 seconds, the page should have changed to indicate the feature flag `Beta` is turned on, as shown in the image below.
![Quickstart Function feature flag enabled](./media/quickstarts/functions-launch-ff-enabled.png)
+> [!NOTE]
+> The example code used in this tutorial can be downloaded from the [Azure App Configuration GitHub repo](https://github.com/Azure/AppConfiguration/tree/master/examples/DotNetCore/AzureFunction).
+ ## Clean up resources [!INCLUDE [azure-app-configuration-cleanup](../../includes/azure-app-configuration-cleanup.md)] ## Next steps
-In this quickstart, you created a feature flag and used it with an Azure Functions app via the [App Configuration provider](/dotnet/api/Microsoft.Extensions.Configuration.AzureAppConfiguration).
+In this quickstart, you created a feature flag and used it with an Azure Functions app via the [Microsoft.FeatureManagement](/dotnet/api/microsoft.featuremanagement) library.
-- Learn more about [feature management](./concept-feature-management.md).-- [Manage feature flags](./manage-feature-flags.md).
+- Learn more about [feature management](./concept-feature-management.md)
+- [Manage feature flags](./manage-feature-flags.md)
+- [Use conditional feature flags](./howto-feature-filters-aspnet-core.md)
+- [Enable staged rollout of features for targeted audiences](./howto-targetingfilter-aspnet-core.md)
- [Use dynamic configuration in an Azure Functions app](./enable-dynamic-configuration-azure-functions-csharp.md)\ No newline at end of file
azure-arc https://docs.microsoft.com/en-us/azure/azure-arc/kubernetes/connect-cluster https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-arc/kubernetes/connect-cluster.md
@@ -26,7 +26,7 @@ Verify you have the following requirements ready:
* You'll need a kubeconfig file to access the cluster and cluster-admin role on the cluster for deployment of Arc enabled Kubernetes agents. * The user or service principal used with `az login` and `az connectedk8s connect` commands must have the 'Read' and 'Write' permissions on the 'Microsoft.Kubernetes/connectedclusters' resource type. The "Kubernetes Cluster - Azure Arc Onboarding" role has these permissions and can be used for role assignments on the user or service principal. * Helm 3 is required for the onboarding the cluster using connectedk8s extension. [Install the latest release of Helm 3](https://helm.sh/docs/intro/install) to meet this requirement.
-* Azure CLI version 2.3+ is required for installing the Azure Arc enabled Kubernetes CLI extensions. [Install Azure CLI](/cli/azure/install-azure-cli?view=azure-cli-latest&preserve-view=true) or update to the latest version to ensure that you have Azure CLI version 2.3+.
+* Azure CLI version 2.15+ is required for installing the Azure Arc enabled Kubernetes CLI extensions. [Install Azure CLI](/cli/azure/install-azure-cli?view=azure-cli-latest&preserve-view=true) or update to the latest version to ensure that you have Azure CLI version 2.15+.
* Install the Arc enabled Kubernetes CLI extensions: Install the `connectedk8s` extension, which helps you connect Kubernetes clusters to Azure:
azure-arc https://docs.microsoft.com/en-us/azure/azure-arc/kubernetes/policy-reference https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-arc/kubernetes/policy-reference.md
@@ -1,7 +1,7 @@
--- title: Built-in policy definitions for Azure Arc enabled Kubernetes description: Lists Azure Policy built-in policy definitions for Azure Arc enabled Kubernetes. These built-in policy definitions provide common approaches to managing your Azure resources.
-ms.date: 11/20/2020
+ms.date: 01/08/2021
ms.service: azure-arc #ms.subservice: azure-arc-kubernetes coming soon author: mlearned
azure-arc https://docs.microsoft.com/en-us/azure/azure-arc/servers/agent-overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-arc/servers/agent-overview.md
@@ -1,7 +1,7 @@
--- title: Overview of the Connected Machine Windows agent description: This article provides a detailed overview of the Azure Arc enabled servers agent available, which supports monitoring virtual machines hosted in hybrid environments.
-ms.date: 12/21/2020
+ms.date: 01/08/2021
ms.topic: conceptual ---
@@ -64,6 +64,8 @@ The following versions of the Windows and Linux operating system are officially
Before configuring your machines with Azure Arc enabled servers, review the Azure Resource Manager [subscription limits](../../azure-resource-manager/management/azure-subscription-service-limits.md#subscription-limits) and [resource group limits](../../azure-resource-manager/management/azure-subscription-service-limits.md#resource-group-limits) to plan for the number of machines to be connected.
+Azure Arc enabled servers supports up to 5,000 machine instances in a resource group.
+ ### Transport Layer Security 1.2 protocol To ensure the security of data in transit to Azure, we strongly encourage you to configure machine to use Transport Layer Security (TLS) 1.2. Older versions of TLS/Secure Sockets Layer (SSL) have been found to be vulnerable and while they still currently work to allow backwards compatibility, they are **not recommended**.
azure-arc https://docs.microsoft.com/en-us/azure/azure-arc/servers/policy-reference https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-arc/servers/policy-reference.md
@@ -1,7 +1,7 @@
--- title: Built-in policy definitions for Azure Arc enabled servers description: Lists Azure Policy built-in policy definitions for Azure Arc enabled servers (preview). These built-in policy definitions provide common approaches to managing your Azure resources.
-ms.date: 11/20/2020
+ms.date: 01/08/2021
ms.topic: reference ms.custom: subject-policy-reference ---
azure-arc https://docs.microsoft.com/en-us/azure/azure-arc/servers/security-controls-policy https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-arc/servers/security-controls-policy.md
@@ -1,7 +1,7 @@
--- title: Azure Policy Regulatory Compliance controls for Azure Arc enabled servers (preview) description: Lists Azure Policy Regulatory Compliance controls available for Azure Arc enabled servers (preview). These built-in policy definitions provide common approaches to managing the compliance of your Azure resources.
-ms.date: 11/20/2020
+ms.date: 01/08/2021
ms.topic: sample ms.custom: subject-policy-compliancecontrols ---
azure-cache-for-redis https://docs.microsoft.com/en-us/azure/azure-cache-for-redis/policy-reference https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-cache-for-redis/policy-reference.md
@@ -1,7 +1,7 @@
--- title: Built-in policy definitions for Azure Cache for Redis description: Lists Azure Policy built-in policy definitions for Azure Cache for Redis. These built-in policy definitions provide common approaches to managing your Azure resources.
-ms.date: 11/20/2020
+ms.date: 01/08/2021
ms.topic: reference author: yegu-ms ms.author: yegu
azure-cache-for-redis https://docs.microsoft.com/en-us/azure/azure-cache-for-redis/security-controls-policy https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-cache-for-redis/security-controls-policy.md
@@ -1,7 +1,7 @@
--- title: Azure Policy Regulatory Compliance controls for Azure Cache for Redis description: Lists Azure Policy Regulatory Compliance controls available for Azure Cache for Redis. These built-in policy definitions provide common approaches to managing the compliance of your Azure resources.
-ms.date: 11/20/2020
+ms.date: 01/08/2021
ms.topic: sample author: yegu-ms ms.author: yegu
azure-functions https://docs.microsoft.com/en-us/azure/azure-functions/functions-bindings-storage-table-input https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-functions/functions-bindings-storage-table-input.md
@@ -292,6 +292,75 @@ public class Person : TableEntity
} ```
+# [Java](#tab/java)
+
+The following example shows an HTTP triggered function that returns a list of person objects who are in a specified partition in Table storage. In the example, the partition key is extracted from the HTTP route, and the tableName and connection are from the function settings.
+
+```java
+public class Person {
+ private String PartitionKey;
+ private String RowKey;
+ private String Name;
+
+ public String getPartitionKey() { return this.PartitionKey; }
+ public void setPartitionKey(String key) { this.PartitionKey = key; }
+ public String getRowKey() { return this.RowKey; }
+ public void setRowKey(String key) { this.RowKey = key; }
+ public String getName() { return this.Name; }
+ public void setName(String name) { this.Name = name; }
+}
+
+@FunctionName("getPersonsByPartitionKey")
+public Person[] get(
+ @HttpTrigger(name = "getPersons", methods = {HttpMethod.GET}, authLevel = AuthorizationLevel.FUNCTION, route="persons/{partitionKey}") HttpRequestMessage<Optional<String>> request,
+ @BindingName("partitionKey") String partitionKey,
+ @TableInput(name="persons", partitionKey="{partitionKey}", tableName="%MyTableName%", connection="MyConnectionString") Person[] persons,
+ final ExecutionContext context) {
+
+ context.getLogger().info("Got query for person related to persons with partition key: " + partitionKey);
+
+ return persons;
+}
+```
+
+The TableInput annotation can also extract the bindings from the JSON body of the request, as the following example shows.
+
+```java
+@FunctionName("GetPersonsByKeysFromRequest")
+public HttpResponseMessage get(
+ @HttpTrigger(name = "getPerson", methods = {HttpMethod.GET}, authLevel = AuthorizationLevel.FUNCTION, route="query") HttpRequestMessage<Optional<String>> request,
+ @TableInput(name="persons", partitionKey="{partitionKey}", rowKey = "{rowKey}", tableName="%MyTableName%", connection="MyConnectionString") Person person,
+ final ExecutionContext context) {
+
+ if (person == null) {
+ return request.createResponseBuilder(HttpStatus.NOT_FOUND)
+ .body("Person not found.")
+ .build();
+ }
+
+ return request.createResponseBuilder(HttpStatus.OK)
+ .header("Content-Type", "application/json")
+ .body(person)
+ .build();
+}
+```
+
+The following example uses a filter to query for persons with a specific name in an Azure Table, and limits the number of possible matches to 10 results.
+
+```java
+@FunctionName("getPersonsByName")
+public Person[] get(
+ @HttpTrigger(name = "getPersons", methods = {HttpMethod.GET}, authLevel = AuthorizationLevel.FUNCTION, route="filter/{name}") HttpRequestMessage<Optional<String>> request,
+ @BindingName("name") String name,
+ @TableInput(name="persons", filter="Name eq '{name}'", take = "10", tableName="%MyTableName%", connection="MyConnectionString") Person[] persons,
+ final ExecutionContext context) {
+
+ context.getLogger().info("Got query for person related to persons with name: " + name);
+
+ return persons;
+}
+```
+ # [JavaScript](#tab/javascript) The following example shows a table input binding in a *function.json* file and [JavaScript code](functions-reference-node.md) that uses the binding. The function uses a queue trigger to read a single table row.
@@ -334,9 +403,53 @@ module.exports = function (context, myQueueItem) {
}; ```
+# [PowerShell](#tab/powershell)
+
+The following function uses a queue trigger to read a single table row as input to a function.
+
+In this example, the binding configuration specifies an explicit value for the table's `partitionKey` and uses an expression to pass the `rowKey`. The `rowKey` expression, `{queueTrigger}`, indicates that the row key comes from the queue message string.
+
+Binding configuration in _function.json_:
+
+```json
+{
+  "bindings": [
+    {
+      "queueName": "myqueue-items",
+      "connection": "MyStorageConnectionAppSetting",
+      "name": "MyQueueItem",
+      "type": "queueTrigger",
+      "direction": "in"
+    },
+    {
+      "name": "PersonEntity",
+      "type": "table",
+      "tableName": "Person",
+      "partitionKey": "Test",
+      "rowKey": "{queueTrigger}",
+      "connection": "MyStorageConnectionAppSetting",
+      "direction": "in"
+    }
+  ],
+  "disabled": false
+}
+```
+
+PowerShell code in _run.ps1_:
+
+```powershell
+param($MyQueueItem, $PersonEntity, $TriggerMetadata)
+Write-Host "PowerShell queue trigger function processed work item: $MyQueueItem"
+Write-Host "Person entity name: $($PersonEntity.Name)"
+```
+ # [Python](#tab/python)
-Single table row
+The following function uses an HTTP trigger to read a single table row as input to a function.
+
+In this example, the binding configuration specifies an explicit value for the table's `partitionKey` and uses an expression to pass the `rowKey`. The `rowKey` expression, `{id}`, indicates that the row key comes from the `{id}` part of the route in the request.
+
+Binding configuration in the _function.json_ file:
```json {
@@ -372,6 +485,8 @@ Single table row
} ```
+Python code in the *\_\_init\_\_.py* file:
+ ```python import json
@@ -383,75 +498,6 @@ def main(req: func.HttpRequest, messageJSON) -> func.HttpResponse:
return func.HttpResponse(f"Table row: {messageJSON}") ```
-# [Java](#tab/java)
-
-The following example shows an HTTP triggered function which returns a list of person objects who are in a specified partition in Table storage. In the example, the partition key is extracted from the http route, and the tableName and connection are from the function settings.
-
-```java
-public class Person {
- private String PartitionKey;
- private String RowKey;
- private String Name;
-
- public String getPartitionKey() { return this.PartitionKey; }
- public void setPartitionKey(String key) { this.PartitionKey = key; }
- public String getRowKey() { return this.RowKey; }
- public void setRowKey(String key) { this.RowKey = key; }
- public String getName() { return this.Name; }
- public void setName(String name) { this.Name = name; }
-}
-
-@FunctionName("getPersonsByPartitionKey")
-public Person[] get(
- @HttpTrigger(name = "getPersons", methods = {HttpMethod.GET}, authLevel = AuthorizationLevel.FUNCTION, route="persons/{partitionKey}") HttpRequestMessage<Optional<String>> request,
- @BindingName("partitionKey") String partitionKey,
- @TableInput(name="persons", partitionKey="{partitionKey}", tableName="%MyTableName%", connection="MyConnectionString") Person[] persons,
- final ExecutionContext context) {
-
- context.getLogger().info("Got query for person related to persons with partition key: " + partitionKey);
-
- return persons;
-}
-```
-
-The TableInput annotation can also extract the bindings from the json body of the request, like the following example shows.
-
-```java
-@FunctionName("GetPersonsByKeysFromRequest")
-public HttpResponseMessage get(
- @HttpTrigger(name = "getPerson", methods = {HttpMethod.GET}, authLevel = AuthorizationLevel.FUNCTION, route="query") HttpRequestMessage<Optional<String>> request,
- @TableInput(name="persons", partitionKey="{partitionKey}", rowKey = "{rowKey}", tableName="%MyTableName%", connection="MyConnectionString") Person person,
- final ExecutionContext context) {
-
- if (person == null) {
- return request.createResponseBuilder(HttpStatus.NOT_FOUND)
- .body("Person not found.")
- .build();
- }
-
- return request.createResponseBuilder(HttpStatus.OK)
- .header("Content-Type", "application/json")
- .body(person)
- .build();
-}
-```
-
-The following examples uses the Filter to query for persons with a specific name in an Azure Table, and limits the number of possible matches to 10 results.
-
-```java
-@FunctionName("getPersonsByName")
-public Person[] get(
- @HttpTrigger(name = "getPersons", methods = {HttpMethod.GET}, authLevel = AuthorizationLevel.FUNCTION, route="filter/{name}") HttpRequestMessage<Optional<String>> request,
- @BindingName("name") String name,
- @TableInput(name="persons", filter="Name eq '{name}'", take = "10", tableName="%MyTableName%", connection="MyConnectionString") Person[] persons,
- final ExecutionContext context) {
-
- context.getLogger().info("Got query for person related to persons with name: " + name);
-
- return persons;
-}
-```
- --- ## Attributes and annotations
@@ -518,17 +564,21 @@ The storage account to use is determined in the following order:
Attributes are not supported by C# Script.
+# [Java](#tab/java)
+
+In the [Java functions runtime library](/java/api/overview/azure/functions/runtime), use the `@TableInput` annotation on parameters whose value would come from Table storage. This annotation can be used with native Java types, POJOs, or nullable values using `Optional<T>`.
+ # [JavaScript](#tab/javascript) Attributes are not supported by JavaScript.
-# [Python](#tab/python)
+# [PowerShell](#tab/powershell)
-Attributes are not supported by Python.
+Attributes are not supported by PowerShell.
-# [Java](#tab/java)
+# [Python](#tab/python)
-In the [Java functions runtime library](/java/api/overview/azure/functions/runtime), use the `@TableInput` annotation on parameters whose value would come from Table storage. This annotation can be used with native Java types, POJOs, or nullable values using `Optional<T>`.
+Attributes are not supported by Python.
---
@@ -578,17 +628,21 @@ The following table explains the binding configuration properties that you set i
> [!NOTE] > `IQueryable` isn't supported in the [Functions v2 runtime](functions-versions.md). An alternative is to [use a CloudTable paramName method parameter](https://stackoverflow.com/questions/48922485/binding-to-table-storage-in-v2-azure-functions-using-cloudtable) to read the table by using the Azure Storage SDK. If you try to bind to `CloudTable` and get an error message, make sure that you have a reference to [the correct Storage SDK version](./functions-bindings-storage-table.md#azure-storage-sdk-version-in-functions-1x).
+# [Java](#tab/java)
+
+The [TableInput](/java/api/com.microsoft.azure.functions.annotation.tableinput) attribute gives you access to the table row that triggered the function.
+ # [JavaScript](#tab/javascript) Set the `filter` and `take` properties. Don't set `partitionKey` or `rowKey`. Access the input table entity (or entities) using `context.bindings.<BINDING_NAME>`. The deserialized objects have `RowKey` and `PartitionKey` properties.
-# [Python](#tab/python)
+# [PowerShell](#tab/powershell)
-Table data is passed to the function as a JSON string. De-serialize the message by calling `json.loads` as shown in the input [example](#example).
+Data is passed to the input parameter as specified by the `name` key in the *function.json* file. Specifying the `partitionKey` and `rowKey` allows you to filter to specific records. See the [PowerShell example](#example) for more detail.
-# [Java](#tab/java)
+# [Python](#tab/python)
-The [TableInput](/java/api/com.microsoft.azure.functions.annotation.tableinput) attribute gives you access to the table row that triggered the function.
+Table data is passed to the function as a JSON string. De-serialize the message by calling `json.loads` as shown in the input [example](#example).
---
azure-functions https://docs.microsoft.com/en-us/azure/azure-functions/functions-bindings-storage-table-output https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-functions/functions-bindings-storage-table-output.md
@@ -96,6 +96,83 @@ public class Person
```
+# [Java](#tab/java)
+
+The following example shows a Java function that uses an HTTP trigger to write a single table row.
+
+```java
+public class Person {
+ private String PartitionKey;
+ private String RowKey;
+ private String Name;
+
+ public String getPartitionKey() {return this.PartitionKey;}
+ public void setPartitionKey(String key) {this.PartitionKey = key; }
+ public String getRowKey() {return this.RowKey;}
+ public void setRowKey(String key) {this.RowKey = key; }
+ public String getName() {return this.Name;}
+ public void setName(String name) {this.Name = name; }
+}
+
+public class AddPerson {
+
+ @FunctionName("addPerson")
+ public HttpResponseMessage get(
+ @HttpTrigger(name = "postPerson", methods = {HttpMethod.POST}, authLevel = AuthorizationLevel.FUNCTION, route="persons/{partitionKey}/{rowKey}") HttpRequestMessage<Optional<Person>> request,
+ @BindingName("partitionKey") String partitionKey,
+ @BindingName("rowKey") String rowKey,
+ @TableOutput(name="person", partitionKey="{partitionKey}", rowKey = "{rowKey}", tableName="%MyTableName%", connection="MyConnectionString") OutputBinding<Person> person,
+ final ExecutionContext context) {
+
+ Person outPerson = new Person();
+ outPerson.setPartitionKey(partitionKey);
+ outPerson.setRowKey(rowKey);
+ outPerson.setName(request.getBody().get().getName());
+
+ person.setValue(outPerson);
+
+ return request.createResponseBuilder(HttpStatus.OK)
+ .header("Content-Type", "application/json")
+ .body(outPerson)
+ .build();
+ }
+}
+```
+
+The following example shows a Java function that uses an HTTP trigger to write multiple table rows.
+
+```java
+public class Person {
+ private String PartitionKey;
+ private String RowKey;
+ private String Name;
+
+ public String getPartitionKey() {return this.PartitionKey;}
+ public void setPartitionKey(String key) {this.PartitionKey = key; }
+ public String getRowKey() {return this.RowKey;}
+ public void setRowKey(String key) {this.RowKey = key; }
+ public String getName() {return this.Name;}
+ public void setName(String name) {this.Name = name; }
+}
+
+public class AddPersons {
+
+ @FunctionName("addPersons")
+ public HttpResponseMessage get(
+ @HttpTrigger(name = "postPersons", methods = {HttpMethod.POST}, authLevel = AuthorizationLevel.FUNCTION, route="persons/") HttpRequestMessage<Optional<Person[]>> request,
+ @TableOutput(name="person", tableName="%MyTableName%", connection="MyConnectionString") OutputBinding<Person[]> persons,
+ final ExecutionContext context) {
+
+ persons.setValue(request.getBody().get());
+
+ return request.createResponseBuilder(HttpStatus.OK)
+ .header("Content-Type", "application/json")
+ .body(request.getBody().get())
+ .build();
+ }
+}
+```
+ # [JavaScript](#tab/javascript) The following example shows a table output binding in a *function.json* file and a [JavaScript function](functions-reference-node.md) that uses the binding. The function writes multiple table entities.
@@ -143,6 +220,46 @@ module.exports = function (context) {
}; ```
+# [PowerShell](#tab/powershell)
+
+The following example demonstrates how to write multiple entities to a table from a function.
+
+Binding configuration in _function.json_:
+
+```json
+{
+  "bindings": [
+    {
+      "name": "InputData",
+      "type": "manualTrigger",
+      "direction": "in"
+    },
+    {
+      "tableName": "Person",
+      "connection": "MyStorageConnectionAppSetting",
+      "name": "TableBinding",
+      "type": "table",
+      "direction": "out"
+    }
+  ],
+  "disabled": false
+}
+```
+
+PowerShell code in _run.ps1_:
+
+```powershell
+param($InputData, $TriggerMetadata)
+
+foreach ($i in 1..10) {
+    Push-OutputBinding -Name TableBinding -Value @{
+        PartitionKey = 'Test'
+        RowKey = "$i"
+        Name = "Name $i"
+    }
+}
+```
+ # [Python](#tab/python) The following example demonstrates how to use the Table storage output binding. The `table` binding is configured in the *function.json* by assigning values to `name`, `tableName`, `partitionKey`, and `connection`:
@@ -202,83 +319,6 @@ def main(req: func.HttpRequest, message: func.Out[str]) -> func.HttpResponse:
return func.HttpResponse(f"Message created with the rowKey: {rowKey}") ```
-# [Java](#tab/java)
-
-The following example shows a Java function that uses an HTTP trigger to write a single table row.
-
-```java
-public class Person {
- private String PartitionKey;
- private String RowKey;
- private String Name;
-
- public String getPartitionKey() {return this.PartitionKey;}
- public void setPartitionKey(String key) {this.PartitionKey = key; }
- public String getRowKey() {return this.RowKey;}
- public void setRowKey(String key) {this.RowKey = key; }
- public String getName() {return this.Name;}
- public void setName(String name) {this.Name = name; }
-}
-
-public class AddPerson {
-
- @FunctionName("addPerson")
- public HttpResponseMessage get(
- @HttpTrigger(name = "postPerson", methods = {HttpMethod.POST}, authLevel = AuthorizationLevel.FUNCTION, route="persons/{partitionKey}/{rowKey}") HttpRequestMessage<Optional<Person>> request,
- @BindingName("partitionKey") String partitionKey,
- @BindingName("rowKey") String rowKey,
- @TableOutput(name="person", partitionKey="{partitionKey}", rowKey = "{rowKey}", tableName="%MyTableName%", connection="MyConnectionString") OutputBinding<Person> person,
- final ExecutionContext context) {
-
- Person outPerson = new Person();
- outPerson.setPartitionKey(partitionKey);
- outPerson.setRowKey(rowKey);
- outPerson.setName(request.getBody().get().getName());
-
- person.setValue(outPerson);
-
- return request.createResponseBuilder(HttpStatus.OK)
- .header("Content-Type", "application/json")
- .body(outPerson)
- .build();
- }
-}
-```
-
-The following example shows a Java function that uses an HTTP trigger to write multiple table rows.
-
-```java
-public class Person {
- private String PartitionKey;
- private String RowKey;
- private String Name;
-
- public String getPartitionKey() {return this.PartitionKey;}
- public void setPartitionKey(String key) {this.PartitionKey = key; }
- public String getRowKey() {return this.RowKey;}
- public void setRowKey(String key) {this.RowKey = key; }
- public String getName() {return this.Name;}
- public void setName(String name) {this.Name = name; }
-}
-
-public class AddPersons {
-
- @FunctionName("addPersons")
- public HttpResponseMessage get(
- @HttpTrigger(name = "postPersons", methods = {HttpMethod.POST}, authLevel = AuthorizationLevel.FUNCTION, route="persons/") HttpRequestMessage<Optional<Person[]>> request,
- @TableOutput(name="person", tableName="%MyTableName%", connection="MyConnectionString") OutputBinding<Person[]> persons,
- final ExecutionContext context) {
-
- persons.setValue(request.getBody().get());
-
- return request.createResponseBuilder(HttpStatus.OK)
- .header("Content-Type", "application/json")
- .body(request.getBody().get())
- .build();
- }
-}
-```
-

---

## Attributes and annotations
@@ -321,19 +361,23 @@ You can use the `StorageAccount` attribute to specify the storage account at cla
Attributes are not supported by C# Script.
+# [Java](#tab/java)
+
+In the [Java functions runtime library](/java/api/overview/azure/functions/runtime), use the [TableOutput](https://github.com/Azure/azure-functions-java-library/blob/master/src/main/java/com/microsoft/azure/functions/annotation/TableOutput.java/) annotation on parameters to write values into table storage.
+
+See the [example](#example) for more detail.
+ # [JavaScript](#tab/javascript) Attributes are not supported by JavaScript.
-# [Python](#tab/python)
+# [PowerShell](#tab/powershell)
-Attributes are not supported by Python.
+Attributes are not supported by PowerShell.
-# [Java](#tab/java)
-
-In the [Java functions runtime library](/java/api/overview/azure/functions/runtime), use the [TableOutput](https://github.com/Azure/azure-functions-java-library/blob/master/src/main/java/com/microsoft/azure/functions/annotation/TableOutput.java/) annotation on parameters to write values into table storage.
+# [Python](#tab/python)
-See the [example for more detail](#example).
+Attributes are not supported by Python.
---
@@ -367,10 +411,22 @@ Access the output table entity by using a method parameter `ICollector<T> paramN
Alternatively you can use a `CloudTable` method parameter to write to the table by using the Azure Storage SDK. If you try to bind to `CloudTable` and get an error message, make sure that you have a reference to [the correct Storage SDK version](./functions-bindings-storage-table.md#azure-storage-sdk-version-in-functions-1x).
+# [Java](#tab/java)
+
+There are two options for outputting a Table storage row from a function by using the [TableOutput](/java/api/com.microsoft.azure.functions.annotation.tableoutput?view=azure-java-stable&preserve-view=true) annotation:
+
+- **Return value**: By applying the annotation to the function itself, the return value of the function is persisted as a Table storage row (see the sketch after this list).
+
+- **Imperative**: To explicitly set the row value, apply the annotation to a specific parameter of the type [`OutputBinding<T>`](/java/api/com.microsoft.azure.functions.outputbinding), where `T` includes the `PartitionKey` and `RowKey` properties. These properties are typically supplied by implementing `ITableEntity` or inheriting from `TableEntity`.
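For illustration only, a minimal sketch of the return-value option might look like the following; it assumes the `Person` POJO and the `%MyTableName%`/`MyConnectionString` settings from the examples above, and the function name is hypothetical.

```java
import java.util.Optional;
import com.microsoft.azure.functions.*;
import com.microsoft.azure.functions.annotation.*;

// Illustrative sketch: names and settings are assumptions, not from the article.
public class AddPersonByReturn {
    @FunctionName("addPersonByReturn")
    // Annotating the method itself binds the function's return value to the table output.
    @TableOutput(name = "person", tableName = "%MyTableName%", connection = "MyConnectionString")
    public Person run(
        @HttpTrigger(name = "req", methods = {HttpMethod.POST},
                authLevel = AuthorizationLevel.FUNCTION) HttpRequestMessage<Optional<Person>> request,
        final ExecutionContext context) {
        // The posted Person must carry its own PartitionKey and RowKey values,
        // since the annotation doesn't set them here.
        return request.getBody().get();
    }
}
```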
+ # [JavaScript](#tab/javascript) Access the output event by using `context.bindings.<name>` where `<name>` is the value specified in the `name` property of *function.json*.
+# [PowerShell](#tab/powershell)
+
+To write table data, use the `Push-OutputBinding` cmdlet. Set the `-Name` parameter to the name of the output binding (`TableBinding` in the example) and the `-Value` parameter to the row data. See the [PowerShell example](#example) for more detail.
+ # [Python](#tab/python) There are two options for outputting a Table storage row message from a function:
@@ -379,14 +435,6 @@ There are two options for outputting a Table storage row message from a function
- **Imperative**: Pass a value to the [set](/python/api/azure-functions/azure.functions.out?view=azure-python&preserve-view=true#set-val--t-----none) method of the parameter declared as an [Out](/python/api/azure-functions/azure.functions.out?view=azure-python&preserve-view=true) type. The value passed to `set` is persisted as a row in the table.
-# [Java](#tab/java)
-
-There are two options for outputting a Table storage row from a function by using the [TableStorageOutput](/java/api/com.microsoft.azure.functions.annotation.tableoutput?view=azure-java-stablet&preserve-view=true) annotation:
-
-- **Return value**: By applying the annotation to the function itself, the return value of the function is persisted as a Table storage row.
-
-- **Imperative**: To explicitly set the message value, apply the annotation to a specific parameter of the type [`OutputBinding<T>`](/java/api/com.microsoft.azure.functions.outputbinding), where `T` includes the `PartitionKey` and `RowKey` properties. These properties are often accompanied by implementing `ITableEntity` or inheriting `TableEntity`.
-

---

## Exceptions and return codes
azure-functions https://docs.microsoft.com/en-us/azure/azure-functions/functions-create-cosmos-db-triggered-function https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-functions/functions-create-cosmos-db-triggered-function.md
@@ -23,6 +23,7 @@ To complete this tutorial:
> [!INCLUDE [SQL API support only](../../includes/functions-cosmosdb-sqlapi-note.md)] ## Sign in to Azure+ Sign in to the [Azure portal](https://portal.azure.com/) with your Azure account. ## Create an Azure Cosmos DB account
@@ -31,7 +32,7 @@ You must have an Azure Cosmos DB account that uses the SQL API before you create
[!INCLUDE [cosmos-db-create-dbaccount](../../includes/cosmos-db-create-dbaccount.md)]
-## Create an Azure Function app
+## Create a function app in Azure
[!INCLUDE [Create function app Azure portal](../../includes/functions-create-function-app-portal.md)]
@@ -132,4 +133,4 @@ After the container specified in the function binding exists, you can test the f
You have created a function that runs when a document is added or modified in your Azure Cosmos DB. For more information about Azure Cosmos DB triggers, see [Azure Cosmos DB bindings for Azure Functions](functions-bindings-cosmosdb.md).
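For readers who later recreate this portal-built trigger in code, a hedged Java sketch is shown below; the database name, container name, connection setting name, and function name are placeholders, not values taken from this article.

```java
import com.microsoft.azure.functions.*;
import com.microsoft.azure.functions.annotation.*;

// Illustrative sketch: database, collection, and setting names are assumptions.
public class CosmosTriggerSketch {
    @FunctionName("cosmosTriggerSketch")
    public void run(
        // Receives a batch of changed documents from the container's change feed;
        // the lease collection tracks which changes have already been processed.
        @CosmosDBTrigger(name = "items", databaseName = "TasksDb", collectionName = "Items",
                connectionStringSetting = "AzureCosmosDBConnection",
                createLeaseCollectionIfNotExists = true) Object[] items,
        final ExecutionContext context) {
        context.getLogger().info("Documents modified: " + items.length);
    }
}
```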
-[!INCLUDE [Next steps note](../../includes/functions-quickstart-next-steps.md)]
\ No newline at end of file
+[!INCLUDE [Next steps note](../../includes/functions-quickstart-next-steps.md)]
azure-functions https://docs.microsoft.com/en-us/azure/azure-functions/functions-create-first-kotlin-maven https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-functions/functions-create-first-kotlin-maven.md
@@ -1,6 +1,6 @@
---
-title: Create your first function in Azure with Kotlin and Maven
-description: Create and publish an HTTP triggered function to Azure with Kotlin and Maven.
+title: Create a Kotlin function in Azure Functions using Maven
+description: Create and publish an HTTP triggered function app to Azure Functions with Kotlin and Maven.
author: dglover ms.service: azure-functions ms.topic: quickstart
azure-functions https://docs.microsoft.com/en-us/azure/azure-functions/functions-create-maven-intellij https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-functions/functions-create-maven-intellij.md
@@ -1,6 +1,6 @@
---
-title: Create an Azure function with Java and IntelliJ
-description: Learn how to create and publish a simple HTTP-triggered, serverless app on Azure with Java and IntelliJ.
+title: Create a Java function in Azure Functions using IntelliJ
+description: Learn how to use IntelliJ to create a simple HTTP-triggered Java function, which you then publish to run in a serverless environment in Azure.
author: jeffhollan ms.topic: how-to ms.date: 07/01/2018
@@ -8,11 +8,11 @@ ms.author: jehollan
ms.custom: mvc, devcenter, devx-track-java ---
-# Create your first Azure function with Java and IntelliJ
+# Create your first Java function in Azure using IntelliJ
This article shows you:
-- How to create a [serverless](https://azure.microsoft.com/overview/serverless-computing/) function project with IntelliJ IDEA
-- Steps for testing and debugging the function in the integrated development environment (IDE) on your own computer
+- How to create an HTTP-triggered Java function in an IntelliJ IDEA project.
+- Steps for testing and debugging the project in the integrated development environment (IDE) on your own computer.
- Instructions for deploying the function project to Azure Functions <!-- TODO ![Access a Hello World function from the command line with cURL](media/functions-create-java-maven/hello-azure.png) -->
@@ -21,7 +21,7 @@ This article shows you:
## Set up your development environment
-To develop a function with Java and IntelliJ, install the following software:
+To create and publish Java functions to Azure using IntelliJ, install the following software:
+ An Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free/?ref=microsoft.com&utm_source=microsoft.com&utm_medium=docs&utm_campaign=visualstudio). + An [Azure supported Java Development Kit (JDK)](/azure/developer/java/fundamentals/java-jdk-long-term-support) for Java 8
@@ -30,7 +30,7 @@ To develop a function with Java and IntelliJ, install the following software:
+ Latest [Function Core Tools](https://github.com/Azure/azure-functions-core-tools)
-## Installation and Sign-in
+## Installation and sign in
1. In IntelliJ IDEA's Settings/Preferences dialog (Ctrl+Alt+S), select **Plugins**. Then, find the **Azure Toolkit for IntelliJ** in the **Marketplace** and click **Install**. After it's installed, click **Restart** to activate the plugin.
@@ -61,73 +61,73 @@ In this section, you use Azure Toolkit for IntelliJ to create a local Azure Func
1. Open the IntelliJ welcome dialog, select *Create New Project* to open a new project wizard, and then select *Azure Functions*.
- ![Create functions project](media/functions-create-first-java-intellij/create-functions-project.png)
+ ![Create function project](media/functions-create-first-java-intellij/create-functions-project.png)
1. Select *Http Trigger*, then click *Next* and follow the wizard through the configuration pages. Confirm your project location, then click *Finish*. IntelliJ IDEA will then open your new project.
- ![Create functions project finish](media/functions-create-first-java-intellij/create-functions-project-finish.png)
+ ![Create function project finish](media/functions-create-first-java-intellij/create-functions-project-finish.png)
-## Run the Function App locally
+## Run the project locally
1. Navigate to `src/main/java/org/example/functions/HttpTriggerFunction.java` to see the generated code. Beside line *17*, you will notice a green *Run* button. Click it and select *Run 'azure-function-exam...'*, and you will see that your function app is running locally with a few logs.
- ![Local run functions project](media/functions-create-first-java-intellij/local-run-functions-project.png)
+ ![Local run project](media/functions-create-first-java-intellij/local-run-functions-project.png)
- ![Local run functions output](media/functions-create-first-java-intellij/local-run-functions-output.png)
+ ![Local run project output](media/functions-create-first-java-intellij/local-run-functions-output.png)
1. You can try the function by accessing the printed endpoint from a browser, such as `http://localhost:7071/api/HttpTrigger-Java?name=Azure`.
- ![Local run functions test result](media/functions-create-first-java-intellij/local-run-functions-test.png)
+ ![Local run function test result](media/functions-create-first-java-intellij/local-run-functions-test.png)
-1. The log is also printed out in your IDEA, now, stop the function by clicking the *stop* button.
+1. The log is also printed in your IDEA. Now, stop the function app by clicking the *stop* button.
- ![Local run functions test log](media/functions-create-first-java-intellij/local-run-functions-log.png)
+ ![Local run function test log](media/functions-create-first-java-intellij/local-run-functions-log.png)
-## Debug the Function App locally
+## Debug the project locally
-1. Now let's try to debug your Function App locally, click the *Debug* button in the toolbar (if you don't see it, click *View -> Appearance -> Toolbar* to enable Toolbar).
+1. To debug the function code in your project locally, select the *Debug* button in the toolbar. If you don't see the toolbar, enable it by choosing **View** > **Appearance** > **Toolbar**.
- ![Local debug functions button](media/functions-create-first-java-intellij/local-debug-functions-button.png)
+ ![Local debug function app button](media/functions-create-first-java-intellij/local-debug-functions-button.png)
1. Click on line *20* of the file `src/main/java/org/example/functions/HttpTriggerFunction.java` to add a breakpoint, then access the endpoint `http://localhost:7071/api/HttpTrigger-Java?name=Azure` again. You will find the breakpoint is hit, and you can try more debug features like *step*, *watch*, and *evaluation*. Stop the debug session by clicking the stop button.
- ![Local debug functions break](media/functions-create-first-java-intellij/local-debug-functions-break.png)
+ ![Local debug function app break](media/functions-create-first-java-intellij/local-debug-functions-break.png)
-## Deploy your Function App to Azure
+## Deploy your project to Azure
1. Right click your project in IntelliJ Project explorer, select *Azure -> Deploy to Azure Functions*
- ![Deploy functions to Azure](media/functions-create-first-java-intellij/deploy-functions-to-azure.png)
+ ![Deploy project to Azure](media/functions-create-first-java-intellij/deploy-functions-to-azure.png)
1. If you don't have any Function App yet, click *No available function, click to create a new one*.
- ![Deploy functions to Azure create app](media/functions-create-first-java-intellij/deploy-functions-create-app.png)
+ ![Create function app in Azure](media/functions-create-first-java-intellij/deploy-functions-create-app.png)
-1. Type in the Function app name and choose proper subscription/platform/resource group/App Service plan, you can also create resource group/App Service plan here. Then, keep app settings unchanged, click *OK* and wait some minutes for the new function to be created. After *Creating New Function App...* progress bar disappears.
+1. Type in the function app name and choose an appropriate subscription, platform, resource group, and App Service plan; you can also create a resource group and App Service plan here. Then keep the app settings unchanged, click *OK*, and wait a few minutes until the new function app is created and the *Creating New Function App...* progress bar disappears.
- ![Deploy functions to Azure create app wizard](media/functions-create-first-java-intellij/deploy-functions-create-app-wizard.png)
+ ![Deploy function app to Azure create app wizard](media/functions-create-first-java-intellij/deploy-functions-create-app-wizard.png)
1. Select the function app you want to deploy to (the new function app you just created is selected automatically). Click *Run* to deploy your functions. ![Screenshot shows the Deploy Azure Functions dialog box.](media/functions-create-first-java-intellij/deploy-functions-run.png)
- ![Deploy functions to Azure log](media/functions-create-first-java-intellij/deploy-functions-log.png)
+ ![Deploy function app to Azure log](media/functions-create-first-java-intellij/deploy-functions-log.png)
-## Manage Azure Functions from IDEA
+## Manage function apps from IDEA
-1. You can manage your functions with *Azure Explorer* in your IDEA, click on *Function App*, you will see all your functions here.
+1. You can manage your function apps with *Azure Explorer* in your IDEA. Click *Function App* to see all your function apps.
- ![View functions in explorer](media/functions-create-first-java-intellij/explorer-view-functions.png)
+ ![View function apps in explorer](media/functions-create-first-java-intellij/explorer-view-functions.png)
-1. Click to select on one of your functions, and right click, select *Show Properties* to open the detail page.
+1. Select one of your function apps, right-click it, and then select *Show Properties* to open the detail page.
- ![Show functions properties](media/functions-create-first-java-intellij/explorer-functions-show-properties.png)
+ ![Show function app properties](media/functions-create-first-java-intellij/explorer-functions-show-properties.png)
-1. Right click on your Function *HttpTrigger-Java*, and select *Trigger Function*, you will see that the browser is opened with the trigger URL.
+1. Right-click the *HttpTrigger-Java* function in your function app and select *Trigger Function*. The browser opens with the trigger URL.
![Screenshot shows a browser with the U R L.](media/functions-create-first-java-intellij/explorer-trigger-functions.png)
-## Add more Functions to the project
+## Add more functions to the project
1. Right click on the package *org.example.functions* and select *New -> Azure Function Class*.
@@ -139,16 +139,16 @@ In this section, you use Azure Toolkit for IntelliJ to create a local Azure Func
![Add functions to the project output](media/functions-create-first-java-intellij/add-functions-output.png)
-## Cleaning Up Functions
+## Cleaning up functions
-1. Deleting Azure Functions in Azure Explorer
+1. Deleting functions in Azure Explorer
![Screenshot shows Delete selected from a context menu.](media/functions-create-first-java-intellij/delete-function.png) ## Next steps
-You've created a Java functions project with an HTTP triggered function, run it on your local machine, and deployed it to Azure. Now, extend your function by...
+You've created a Java project with an HTTP triggered function, run it on your local machine, and deployed it to Azure. Now, extend your function by...
> [!div class="nextstepaction"] > [Adding an Azure Storage queue output binding](./functions-add-output-binding-storage-queue-java.md)
@@ -159,4 +159,4 @@ You've created a Java functions project with an HTTP triggered function, run it
[intellij-azure-popup]: media/functions-create-first-java-intellij/intellij-azure-login-popup.png [intellij-azure-copycode]: media/functions-create-first-java-intellij/intellij-azure-login-copyopen.png [intellij-azure-link-ms-account]: media/functions-create-first-java-intellij/intellij-azure-login-linkms-account.png
-[intellij-azure-login-select-subs]: media/functions-create-first-java-intellij/intellij-azure-login-selectsubs.png
\ No newline at end of file
+[intellij-azure-login-select-subs]: media/functions-create-first-java-intellij/intellij-azure-login-selectsubs.png
azure-functions https://docs.microsoft.com/en-us/azure/azure-functions/functions-create-maven-kotlin-intellij https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-functions/functions-create-maven-kotlin-intellij.md
@@ -1,6 +1,6 @@
---
-title: Create an Azure function with Kotlin and IntelliJ
-description: Learn how to create and publish a simple HTTP-triggered, serverless app on Azure with Kotlin and IntelliJ.
+title: Create a Kotlin function in Azure Functions using IntelliJ
+description: Learn how to use IntelliJ to create a simple HTTP-triggered Kotlin function, which you then publish to run in a serverless environment in Azure.
author: dglover ms.service: azure-functions ms.topic: quickstart
@@ -8,15 +8,15 @@ ms.date: 03/25/2020
ms.author: dglover ---
-# Quickstart: Create your first HTTP triggered function with Kotlin and IntelliJ
+# Create your first Kotlin function in Azure using IntelliJ
-This article shows you how to create a [serverless](https://azure.microsoft.com/overview/serverless-computing/) function project with IntelliJ IDEA and Apache Maven. It also shows how to locally debug your function code in the integrated development environment (IDE) and then deploy the function project to Azure.
+This article shows you how to create an HTTP-triggered Kotlin function in an IntelliJ IDEA project, run and debug the project in the integrated development environment (IDE), and finally deploy the function project to a function app in Azure.
[!INCLUDE [quickstarts-free-trial-note](../../includes/quickstarts-free-trial-note.md)] ## Set up your development environment
-To develop a function with Kotlin and IntelliJ, install the following software:
+To create and publish Kotlin functions to Azure using IntelliJ, install the following software:
- [Java Developer Kit](/azure/developer/java/fundamentals/java-jdk-long-term-support) (JDK), version 8 - [Apache Maven](https://maven.apache.org), version 3.0 or higher
@@ -27,7 +27,7 @@ To develop a function with Kotlin and IntelliJ, install the following software:
> [!IMPORTANT] > The JAVA_HOME environment variable must be set to the install location of the JDK to complete the steps in this article.
-## Create a Functions project
+## Create a function project
1. In IntelliJ IDEA, select **Create New Project**. 1. In the **New Project** window, select **Maven** from the left pane.
@@ -42,10 +42,10 @@ To develop a function with Kotlin and IntelliJ, install the following software:
Maven creates the project files in a new folder with the same name as the _ArtifactId_ value. The project's generated code is a simple [HTTP-triggered](./functions-bindings-http-webhook.md) function that echoes the body of the triggering HTTP request.
-## Run functions locally in the IDE
+## Run the project locally in the IDE
> [!NOTE]
-> To run and debug functions locally, make sure you've installed [Azure Functions Core Tools, version 2](functions-run-local.md#v2).
+> To run and debug the project locally, make sure you've installed [Azure Functions Core Tools, version 2](functions-run-local.md#v2).
1. Import changes manually or enable [auto import](https://www.jetbrains.com/help/idea/creating-and-optimizing-imports.html). 1. Open the **Maven Projects** toolbar.
@@ -55,7 +55,7 @@ Maven creates the project files in a new folder with the same name as the _Artif
1. Close the run dialog box when you're done testing your function. Only one function host can be active and running locally at a time.
-## Debug the function in IntelliJ
+## Debug the project in IntelliJ
1. To start the function host in debug mode, add **-DenableDebug** as the argument when you run your function. You can either change the configuration in [maven goals](https://www.jetbrains.com/help/idea/maven-support.html#run_goal) or run the following command in a terminal window:
@@ -70,25 +70,25 @@ Maven creates the project files in a new folder with the same name as the _Artif
1. Complete the _Name_ and _Settings_ fields, and then select **OK** to save the configuration. 1. After setup, select **Debug < Remote Configuration Name >** or press Shift+F9 on your keyboard to start debugging.
- ![Debug functions in IntelliJ](media/functions-create-first-kotlin-intellij/debug-configuration-intellij.PNG)
+ ![Debug project in IntelliJ](media/functions-create-first-kotlin-intellij/debug-configuration-intellij.PNG)
1. When you're finished, stop the debugger and the running process. Only one function host can be active and running locally at a time.
-## Deploy the function to Azure
+## Deploy the project to Azure
-1. Before you can deploy your function to Azure, you must [log in by using the Azure CLI](/cli/azure/authenticate-azure-cli?view=azure-cli-latest).
+1. Before you can deploy your project to a function app in Azure, you must [log in by using the Azure CLI](/cli/azure/authenticate-azure-cli?view=azure-cli-latest).
``` azurecli az login ```
-1. Deploy your code into a new function by using the `azure-functions:deploy` Maven target. You can also select the **azure-functions:deploy** option in the Maven Projects window.
+1. Deploy your code into a new function app by using the `azure-functions:deploy` Maven target. You can also select the **azure-functions:deploy** option in the Maven Projects window.
``` mvn azure-functions:deploy ```
-1. Find the URL for your function in the Azure CLI output after the function has been successfully deployed.
+1. Find the URL for your HTTP trigger function in the Azure CLI output after the function app has been successfully deployed.
``` output [INFO] Successfully deployed Function App with package.
@@ -100,5 +100,5 @@ Maven creates the project files in a new folder with the same name as the _Artif
## Next steps
-Now that you have deployed your first Kotlin function to Azure, review the [Java Functions developer guide](functions-reference-java.md) for more information on developing Java and Kotlin functions.
-- Add additional functions with different triggers to your project by using the `azure-functions:add` Maven target.\ No newline at end of file
+Now that you have deployed your first Kotlin function app to Azure, review the [Azure Functions Java developer guide](functions-reference-java.md) for more information on developing Java and Kotlin functions.
+- Add additional functions with different triggers to your project by using the `azure-functions:add` Maven target.
azure-functions https://docs.microsoft.com/en-us/azure/azure-functions/functions-create-scheduled-function https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-functions/functions-create-scheduled-function.md
@@ -1,15 +1,15 @@
---
-title: Create a function that runs on a schedule in Azure
-description: Learn how to create a function in Azure that runs based on a schedule that you define.
+title: Create a function in Azure that runs on a schedule
+description: Learn how to use the Azure portal to create a function that runs based on a schedule that you define.
ms.assetid: ba50ee47-58e0-4972-b67b-828f2dc48701 ms.topic: how-to ms.date: 04/16/2020 ms.custom: mvc, cc996988-fb4f-47 ---
-# Create a function in Azure that is triggered by a timer
+# Create a function in the Azure portal that runs on a schedule
-Learn how to use Azure Functions to create a [serverless](https://azure.microsoft.com/solutions/serverless/) function that runs based on a schedule that you define.
+Learn how to use the Azure portal to create a function that runs [serverless](https://azure.microsoft.com/solutions/serverless/) on Azure based on a schedule that you define.
## Prerequisites
@@ -17,7 +17,7 @@ To complete this tutorial:
+ If you don't have an Azure subscription, create a [free account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F) before you begin.
-## Create an Azure Function app
+## Create a function app
[!INCLUDE [Create function app Azure portal](../../includes/functions-create-function-app-portal.md)]
@@ -68,7 +68,7 @@ Now, you change the function's schedule so that it runs once every hour instead
1. Update the **Schedule** value to `0 0 */1 * * *`, and then select **Save**.
- :::image type="content" source="./media/functions-create-scheduled-function/function-edit-timer-schedule.png" alt-text="Functions update timer schedule in the Azure portal." border="true":::
+ :::image type="content" source="./media/functions-create-scheduled-function/function-edit-timer-schedule.png" alt-text="Update function timer schedule in the Azure portal." border="true":::
You now have a function that runs once every hour, on the hour.
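As a hedged aside for readers who prefer code over the portal, the same six-field NCRONTAB expression can drive a timer trigger; this Java sketch uses a hypothetical function name and is not part of the article's portal flow.

```java
import com.microsoft.azure.functions.*;
import com.microsoft.azure.functions.annotation.*;

// Illustrative sketch: the class and function names are assumptions.
public class HourlyTimerSketch {
    @FunctionName("hourlyTimerSketch")
    public void run(
        // NCRONTAB fields: {second} {minute} {hour} {day} {month} {day-of-week};
        // "0 0 */1 * * *" fires once every hour, on the hour.
        @TimerTrigger(name = "timerInfo", schedule = "0 0 */1 * * *") String timerInfo,
        final ExecutionContext context) {
        context.getLogger().info("Timer fired: " + timerInfo);
    }
}
```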
azure-functions https://docs.microsoft.com/en-us/azure/azure-functions/functions-how-to-github-actions https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-functions/functions-how-to-github-actions.md
@@ -10,9 +10,9 @@ ms.custom: "devx-track-csharp, devx-track-python, github-actions-azure"
# Continuous delivery by using GitHub Actions
-Use [GitHub Actions](https://github.com/features/actions) to define a workflow to automatically build and deploy code to your Azure function app.
+Use [GitHub Actions](https://github.com/features/actions) to define a workflow to automatically build and deploy code to your function app in Azure Functions.
-In GitHub Actions, a [workflow](https://docs.github.com/en/free-pro-team@latest/actions/learn-github-actions/introduction-to-github-actions#the-components-of-github-actions) is an automated process that you define in your GitHub repository. This process tells GitHub how to build and deploy your functions app project on GitHub.
+In GitHub Actions, a [workflow](https://docs.github.com/en/free-pro-team@latest/actions/learn-github-actions/introduction-to-github-actions#the-components-of-github-actions) is an automated process that you define in your GitHub repository. This process tells GitHub how to build and deploy your function app project on GitHub.
A workflow is defined by a YAML (.yml) file in the `/.github/workflows/` path in your repository. This definition contains the various steps and parameters that make up the workflow.
@@ -182,6 +182,7 @@ The following example shows the part of the workflow that builds the function ap
--- ## Deploy the function app+ Use the `Azure/functions-action` action to deploy your code to a function app. This action has three parameters: |Parameter |Explanation |
@@ -197,7 +198,7 @@ The following example uses version 1 of the `functions-action` and a `publish pr
Set up a .NET Linux workflow that uses a publish profile. ```yaml
-name: Deploy DotNet project to Azure function app with a Linux environment
+name: Deploy DotNet project to function app with a Linux environment
on: [push]
@@ -236,7 +237,7 @@ jobs:
Set up a .NET Windows workflow that uses a publish profile. ```yaml
-name: Deploy DotNet project to Azure function app with a Windows environment
+name: Deploy DotNet project to function app with a Windows environment
on: [push]
@@ -278,7 +279,7 @@ jobs:
Set up a Java Linux workflow that uses a publish profile. ```yaml
-name: Deploy Java project to Azure Function App
+name: Deploy Java project to function app
on: [push]
@@ -320,7 +321,7 @@ jobs:
Set up a Java Windows workflow that uses a publish profile. ```yaml
-name: Deploy Java project to Azure Function App
+name: Deploy Java project to function app
on: [push]
@@ -364,7 +365,7 @@ jobs:
Set up a Node.JS Linux workflow that uses a publish profile. ```yaml
-name: Deploy Node.js project to Azure Function App
+name: Deploy Node.js project to function app
on: [push]
@@ -406,7 +407,7 @@ jobs:
Set up a Node.JS Windows workflow that uses a publish profile. ```yaml
-name: Deploy Node.js project to Azure Function App
+name: Deploy Node.js project to function app
on: [push]
@@ -450,7 +451,7 @@ jobs:
Set up a Python Linux workflow that uses a publish profile. ```yaml
-name: Deploy Python project to Azure Function App
+name: Deploy Python project to function app
on: [push]
@@ -493,4 +494,4 @@ jobs:
## Next steps > [!div class="nextstepaction"]
-> [Learn more about Azure and GitHub integration](/azure/developer/github/)
\ No newline at end of file
+> [Learn more about Azure and GitHub integration](/azure/developer/github/)
azure-functions https://docs.microsoft.com/en-us/azure/azure-functions/functions-how-to-use-azure-function-app-settings https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-functions/functions-how-to-use-azure-function-app-settings.md
@@ -1,6 +1,6 @@
---
-title: Configure function app settings in Azure
-description: Learn how to configure Azure function app settings.
+title: Configure function app settings in Azure Functions
+description: Learn how to configure function app settings in Azure Functions.
ms.assetid: 81eb04f8-9a27-45bb-bf24-9ab6c30d205c ms.topic: conceptual ms.date: 04/13/2020
azure-functions https://docs.microsoft.com/en-us/azure/azure-functions/functions-infrastructure-as-code https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-functions/functions-infrastructure-as-code.md
@@ -136,7 +136,7 @@ A function app must include these application settings:
| Setting name | Description | Example values | |------------------------------|-------------------------------------------------------------------------------------------|---------------------------------------| | AzureWebJobsStorage | A connection string to a storage account that the Functions runtime uses for internal queueing | See [Storage account](#storage) |
-| FUNCTIONS_EXTENSION_VERSION | The version of the Azure Functions runtime | `~2` |
+| FUNCTIONS_EXTENSION_VERSION | The version of the Azure Functions runtime | `~3` |
| FUNCTIONS_WORKER_RUNTIME | The language stack to be used for functions in this app | `dotnet`, `node`, `java`, `python`, or `powershell` | | WEBSITE_NODE_DEFAULT_VERSION | Only needed if using the `node` language stack, specifies the version to use | `10.14.1` |
@@ -160,7 +160,7 @@ These properties are specified in the `appSettings` collection in the `siteConfi
}, { "name": "FUNCTIONS_EXTENSION_VERSION",
- "value": "~2"
+ "value": "~3"
} ] }
@@ -247,7 +247,7 @@ On Windows, a Consumption plan requires two additional settings in the site conf
}, { "name": "FUNCTIONS_EXTENSION_VERSION",
- "value": "~2"
+ "value": "~3"
} ] }
@@ -286,7 +286,7 @@ On Linux, the function app must have its `kind` set to `functionapp,linux`, and
}, { "name": "FUNCTIONS_EXTENSION_VERSION",
- "value": "~2"
+ "value": "~3"
} ] },
@@ -367,7 +367,7 @@ A function app on a Premium plan must have the `serverFarmId` property set to th
}, { "name": "FUNCTIONS_EXTENSION_VERSION",
- "value": "~2"
+ "value": "~3"
} ] }
@@ -455,7 +455,7 @@ A function app on an App Service plan must have the `serverFarmId` property set
}, { "name": "FUNCTIONS_EXTENSION_VERSION",
- "value": "~2"
+ "value": "~3"
} ] }
@@ -463,13 +463,13 @@ A function app on an App Service plan must have the `serverFarmId` property set
} ```
-Linux apps should also include a `linuxFxVersion` property under `siteConfig`. If you are just deploying code, the value for this is determined by your desired runtime stack:
+Linux apps should also include a `linuxFxVersion` property under `siteConfig`. If you are just deploying code, the value for this is determined by your desired runtime stack in the format of `runtime|runtimeVersion`:
| Stack | Example value | |------------------|-------------------------------------------------------|
-| Python | `DOCKER|microsoft/azure-functions-python3.6:2.0` |
-| JavaScript | `DOCKER|microsoft/azure-functions-node8:2.0` |
-| .NET | `DOCKER|microsoft/azure-functions-dotnet-core2.0:2.0` |
+| Python | `python|3.7` |
+| JavaScript | `node|12` |
+| .NET | `dotnet|3.0` |
```json {
@@ -500,10 +500,10 @@ Linux apps should also include a `linuxFxVersion` property under `siteConfig`. I
}, { "name": "FUNCTIONS_EXTENSION_VERSION",
- "value": "~2"
+ "value": "~3"
} ],
- "linuxFxVersion": "DOCKER|microsoft/azure-functions-node8:2.0"
+ "linuxFxVersion": "node|12"
} } }
@@ -540,7 +540,7 @@ If you are [deploying a custom container image](./functions-create-function-linu
}, { "name": "FUNCTIONS_EXTENSION_VERSION",
- "value": "~2"
+ "value": "~3"
}, { "name": "DOCKER_REGISTRY_SERVER_URL",
@@ -590,7 +590,7 @@ A function app has many child resources that you can use in your deployment, inc
"appSettings": [ { "name": "FUNCTIONS_EXTENSION_VERSION",
- "value": "~2"
+ "value": "~3"
}, { "name": "Project",
@@ -612,7 +612,7 @@ A function app has many child resources that you can use in your deployment, inc
"properties": { "AzureWebJobsStorage": "[concat('DefaultEndpointsProtocol=https;AccountName=', variables('storageAccountName'), ';AccountKey=', listKeys(variables('storageAccountid'),'2019-06-01').keys[0].value)]", "AzureWebJobsDashboard": "[concat('DefaultEndpointsProtocol=https;AccountName=', variables('storageAccountName'), ';AccountKey=', listKeys(variables('storageAccountid'),'2019-06-01').keys[0].value)]",
- "FUNCTIONS_EXTENSION_VERSION": "~2",
+ "FUNCTIONS_EXTENSION_VERSION": "~3",
"FUNCTIONS_WORKER_RUNTIME": "dotnet", "Project": "src" }
azure-functions https://docs.microsoft.com/en-us/azure/azure-functions/scripts/functions-cli-mount-files-storage-linux https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-functions/scripts/functions-cli-mount-files-storage-linux.md
@@ -21,9 +21,9 @@ This Azure Functions sample script creates a function app and creates a share in
## Sample script
-This script creates an Azure Function app using the [Consumption plan](../consumption-plan.md).
+This script creates a function app in Azure Functions using the [Consumption plan](../consumption-plan.md).
-[!code-azurecli-interactive[main](../../../cli_scripts/azure-functions/functions-cli-mount-files-storage-linux/functions-cli-mount-files-storage-linux.sh "Create an Azure Function on a Consumption plan")]
+[!code-azurecli-interactive[main](../../../cli_scripts/azure-functions/functions-cli-mount-files-storage-linux/functions-cli-mount-files-storage-linux.sh "Create a function app on a Consumption plan")]
[!INCLUDE [cli-script-clean-up](../../../includes/cli-script-clean-up.md)]
azure-functions https://docs.microsoft.com/en-us/azure/azure-functions/security-baseline https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-functions/security-baseline.md
@@ -52,16 +52,16 @@ If using Network Security groups (NSGs) with your Azure Functions implementation
### 1.3: Protect critical web applications
-**Guidance**: To fully secure your Azure Function endpoints in production, you should consider implementing one of the following function app-level security options:
+**Guidance**: To fully secure your Azure Functions endpoints in production, you should consider implementing one of the following function app-level security options:
- Turn on App Service Authentication / Authorization for your function app,
- Use Azure API Management (APIM) to authenticate requests, or
- Deploy your function app to an Azure App Service Environment.
-In addition, ensure remote debugging has been disabled for your production Azure Functions. Furthermore, Cross-Origin Resource Sharing (CORS) should not allow all domains to access your Azure Function app. Allow only required domains to interact with your Azure Function app.
+In addition, ensure remote debugging has been disabled for your production Azure Functions. Furthermore, Cross-Origin Resource Sharing (CORS) should not allow all domains to access your function app in Azure. Allow only required domains to interact with your function app.
Consider deploying Azure Web Application Firewall (WAF) as part of the networking configuration for additional inspection of incoming traffic. Enable Diagnostic Setting for WAF and ingest logs into a Storage Account, Event Hub, or Log Analytics Workspace.

-- [How to secure Azure Function endpoints in production](./functions-bindings-http-webhook-trigger.md?tabs=csharp#secure-an-http-endpoint-in-production)
+- [How to secure Azure Functions endpoints in production](./functions-bindings-http-webhook-trigger.md?tabs=csharp#secure-an-http-endpoint-in-production)
- [How to deploy Azure WAF](../web-application-firewall/ag/create-waf-policy-ag.md)
@@ -72,7 +72,7 @@ Consider deploying Azure Web Application Firewall (WAF) as part of the networkin
### 1.4: Deny communications with known malicious IP addresses **Guidance**: Enable DDoS Protection Standard on the Virtual Networks associated with your functions apps to guard against DDoS attacks. Use Azure Security Center Integrated Threat Intelligence to deny communications with known malicious or unused public IP addresses.
-In addition, configure a front-end gateway, such as Azure Web Application Firewall, to authenticate all incoming requests and filter out malicious traffic. Azure Web Application Firewall can help secure your Azure Function apps by inspecting inbound web traffic to block SQL injections, Cross-Site Scripting, malware uploads, and DDoS attacks. Introduction of a WAF requires either an App Service Environment or use of Private Endpoints (Preview). Ensure that Private Endpoints are no longer in (Preview) before using them with production workloads.
+In addition, configure a front-end gateway, such as Azure Web Application Firewall, to authenticate all incoming requests and filter out malicious traffic. Azure Web Application Firewall can help secure your function app by inspecting inbound web traffic to block SQL injections, Cross-Site Scripting, malware uploads, and DDoS attacks. Introduction of a WAF requires either an App Service Environment or use of Private Endpoints (Preview). Ensure that Private Endpoints are no longer in (Preview) before using them with production workloads.
- [Azure Functions networking options](./functions-networking-options.md)
@@ -171,9 +171,9 @@ Alternatively, there are multiple marketplace options like the Barracuda WAF for
### 1.9: Maintain standard security configurations for network devices

**Guidance**: Define and implement standard security configurations for network settings related to your Azure Functions. Use Azure Policy aliases in the "Microsoft.Web" and "Microsoft.Network" namespaces to create custom policies to audit or enforce the network configuration of your Azure Functions. You may also make use of built-in policy definitions for Azure Functions, such as:
-- CORS should not allow every resource to access your Function Apps
-- Function App should only be accessible over HTTPS
-- Latest TLS version should be used in your Function App
+- CORS should not allow every resource to access your function apps
+- Function app should only be accessible over HTTPS
+- Latest TLS version should be used in your function app
You may also use Azure Blueprints to simplify large-scale Azure deployments by packaging key environment artifacts, such as Azure Resource Manager templates, Azure role-based access control (Azure RBAC), and policies in a single blueprint definition. You can easily apply the blueprint to new subscriptions, environments, and fine-tune control and management through versioning.
@@ -229,7 +229,7 @@ You may use Azure PowerShell or Azure CLI to look-up or perform actions on resou
Azure Functions also offers built-in integration with Azure Application Insights to monitor functions. Application Insights collects log, performance, and error data. It automatically detects performance anomalies and includes powerful analytics tools to help you diagnose issues and to understand how your functions are used.
-If you have built-in custom security/audit logging within your Azure Function app, enable the diagnostics setting "FunctionAppLogs" and send the logs to a Log Analytics workspace, Azure event hub, or Azure storage account for archive.
+If you have built-in custom security/audit logging within your function app, enable the diagnostics setting "FunctionAppLogs" and send the logs to a Log Analytics workspace, Azure event hub, or Azure storage account for archive.
Optionally, you may enable and on-board data to Azure Sentinel or a third-party SIEM.
@@ -249,7 +249,7 @@ Optionally, you may enable and on-board data to Azure Sentinel or a third-party
**Guidance**: For control plane audit logging, enable Azure Activity Log diagnostic settings and send the logs to a Log Analytics workspace, Azure event hub, or Azure storage account for archive. Using Azure Activity Log data, you can determine the "what, who, and when" for any write operations (PUT, POST, DELETE) performed at the control plane level for your Azure resources.
-If you have built-in custom security/audit logging within your Azure Function app, enable the diagnostics setting "FunctionAppLogs" and send the logs to a Log Analytics workspace, Azure event hub, or Azure storage account for archive.
+If you have built-in custom security/audit logging within your function app, enable the diagnostics setting "FunctionAppLogs" and send the logs to a Log Analytics workspace, Azure event hub, or Azure storage account for archive.
- [How to enable Diagnostic Settings for Azure Activity Log](../azure-monitor/platform/activity-log.md)
@@ -269,7 +269,7 @@ If you have built-in custom security/audit logging within your Azure Function ap
### 2.5: Configure security log storage retention
-**Guidance**: In Azure Monitor, set log retention period for Log Analytics workspaces associated with your Azure Functions apps according to your organization's compliance regulations.
+**Guidance**: In Azure Monitor, set log retention period for Log Analytics workspaces associated with your function apps according to your organization's compliance regulations.
- [How to set log retention parameters](../azure-monitor/platform/manage-cost-storage.md#change-the-data-retention-period)
@@ -279,11 +279,11 @@ If you have built-in custom security/audit logging within your Azure Function ap
### 2.6: Monitor and review Logs
-**Guidance**: Enable Azure Activity Log diagnostic settings as well as the diagnostic settings for your Azure Functions app and send the logs to a Log Analytics workspace. Perform queries in Log Analytics to search terms, identify trends, analyze patterns, and provide many other insights based on the collected data.
+**Guidance**: Enable Azure Activity Log diagnostic settings as well as the diagnostic settings for your function app and send the logs to a Log Analytics workspace. Perform queries in Log Analytics to search terms, identify trends, analyze patterns, and provide many other insights based on the collected data.
-Enable Application Insights for your Azure Functions apps to collect log, performance, and error data. You can view the telemetry data collected by Application Insights within the Azure portal.
+Enable Application Insights for your function apps to collect log, performance, and error data. You can view the telemetry data collected by Application Insights within the Azure portal.
-If you have built-in custom security/audit logging within your Azure Function app, enable the diagnostics setting "FunctionAppLogs" and send the logs to a Log Analytics workspace, Azure event hub, or Azure storage account for archive.
+If you have built-in custom security/audit logging within your function app, enable the diagnostics setting "FunctionAppLogs" and send the logs to a Log Analytics workspace, Azure event hub, or Azure storage account for archive.
Optionally, you may enable and on-board data to Azure Sentinel or a third-party SIEM.
@@ -301,9 +301,9 @@ Optionally, you may enable and on-board data to Azure Sentinel or a third-party
### 2.7: Enable alerts for anomalous activity
-**Guidance**: Enable Azure Activity Log diagnostic settings as well as the diagnostic settings for your Azure Functions app and send the logs to a Log Analytics workspace. Perform queries in Log Analytics to search terms, identify trends, analyze patterns, and provide many other insights based on the collected data. You can create alerts based on your Log Analytics workspace queries.
+**Guidance**: Enable Azure Activity Log diagnostic settings as well as the diagnostic settings for your function app and send the logs to a Log Analytics workspace. Perform queries in Log Analytics to search terms, identify trends, analyze patterns, and provide many other insights based on the collected data. You can create alerts based on your Log Analytics workspace queries.
-Enable Application Insights for your Azure Functions apps to collect log, performance, and error data. You can view the telemetry data collected by Application Insights and create alerts within the Azure portal.
+Enable Application Insights for your function apps to collect log, performance, and error data. You can view the telemetry data collected by Application Insights and create alerts within the Azure portal.
Optionally, you may enable and on-board data to Azure Sentinel or a third-party SIEM.
@@ -323,7 +323,7 @@ Optionally, you may enable and on-board data to Azure Sentinel or a third-party
### 2.8: Centralize anti-malware logging
-**Guidance**: Not applicable; Azure Functions apps do not process or produce anti-malware related logs.
+**Guidance**: Not applicable; function apps do not process or produce anti-malware related logs.
**Azure Security Center monitoring**: Not applicable
@@ -331,7 +331,7 @@ Optionally, you may enable and on-board data to Azure Sentinel or a third-party
### 2.9: Enable DNS query logging
-**Guidance**: Not applicable; Azure Functions apps do not process or produce user accessible DNS-related logs.
+**Guidance**: Not applicable; function apps do not process or produce user accessible DNS-related logs.
**Azure Security Center monitoring**: Not applicable
@@ -398,7 +398,7 @@ External accounts with owner permissions should be removed from your subscriptio
### 3.4: Use single sign-on (SSO) with Azure Active Directory
-**Guidance**: Wherever possible, use Azure Active Directory SSO instead than configuring individual stand-alone credentials for data access to your function app. Use Azure Security Center Identity and Access Management recommendations. Implement single sign-on for your Azure Functions apps using the App Service Authentication / Authorization feature.
+**Guidance**: Wherever possible, use Azure Active Directory SSO instead of configuring individual stand-alone credentials for data access to your function app. Use Azure Security Center Identity and Access Management recommendations. Implement single sign-on for your function apps using the App Service Authentication / Authorization feature.
- [Understand authentication and authorization in Azure Functions](../app-service/overview-authentication-authorization.md#identity-providers)
@@ -458,9 +458,9 @@ In addition, use Azure AD risk detections to view alerts and reports on risky us
### 3.9: Use Azure Active Directory
-**Guidance**: Use Azure Active Directory (AD) as the central authentication and authorization system for your Azure Functions apps. Azure AD protects data by using strong encryption for data at rest and in transit. Azure AD also salts, hashes, and securely stores user credentials.
+**Guidance**: Use Azure Active Directory (AD) as the central authentication and authorization system for your function apps. Azure AD protects data by using strong encryption for data at rest and in transit. Azure AD also salts, hashes, and securely stores user credentials.
-- [How to configure your Azure Functions app to use Azure AD login](../app-service/configure-authentication-provider-aad.md)
+- [How to configure your function app to use Azure AD login](../app-service/configure-authentication-provider-aad.md)
- [How to create and configure an Azure AD instance](../active-directory/fundamentals/active-directory-access-create-new-tenant.md)
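As a hedged sketch of enabling this from the CLI (function apps are App Service sites, so the classic `az webapp auth update` command applies; the IDs and secret are placeholders):

```bash
# Require Azure AD login for all requests to the function app.
az webapp auth update \
  --name <function-app> \
  --resource-group <rg> \
  --enabled true \
  --action LoginWithAzureActiveDirectory \
  --aad-client-id <app-registration-client-id> \
  --aad-client-secret <client-secret>
```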
@@ -482,13 +482,13 @@ In addition, use Azure AD risk detections to view alerts and reports on risky us
### 3.11: Monitor attempts to access deactivated accounts
-**Guidance**: Use Azure Active Directory (AD) as the central authentication and authorization system for your Azure Function apps. Azure AD protects data by using strong encryption for data at rest and in transit. Azure AD also salts, hashes, and securely stores user credentials.
+**Guidance**: Use Azure Active Directory (AD) as the central authentication and authorization system for your function apps. Azure AD protects data by using strong encryption for data at rest and in transit. Azure AD also salts, hashes, and securely stores user credentials.
You have access to Azure AD sign-in activity, audit and risk event log sources, which allow you to integrate with Azure Sentinel or a third-party SIEM. You can streamline this process by creating diagnostic settings for Azure AD user accounts and sending the audit logs and sign-in logs to a Log Analytics workspace. You can configure desired log alerts within Log Analytics. -- [How to configure your Azure Functions app to use Azure AD login](../app-service/configure-authentication-provider-aad.md)
+- [How to configure your function app to use Azure AD login](../app-service/configure-authentication-provider-aad.md)
- [How to integrate Azure Activity Logs into Azure Monitor](../active-directory/reports-monitoring/howto-integrate-activity-logs-with-log-analytics.md)
@@ -500,7 +500,7 @@ You can streamline this process by creating diagnostic settings for Azure AD use
### 3.12: Alert on account login behavior deviation
-**Guidance**: Use Azure Active Directory (AD) as the central authentication and authorization system for your Azure Functions apps. For account login behavior deviation on the control plane (the Azure portal), use Azure Active Directory (AD) Identity Protection and risk detection features to configure automated responses to detected suspicious actions related to user identities. You can also ingest data into Azure Sentinel for further investigation.
+**Guidance**: Use Azure Active Directory (AD) as the central authentication and authorization system for your function apps. For account login behavior deviation on the control plane (the Azure portal), use Azure Active Directory (AD) Identity Protection and risk detection features to configure automated responses to detected suspicious actions related to user identities. You can also ingest data into Azure Sentinel for further investigation.
- [How to view Azure AD risky sign-ins](../active-directory/identity-protection/overview-identity-protection.md)
@@ -538,9 +538,9 @@ You can streamline this process by creating diagnostic settings for Azure AD use
### 4.2: Isolate systems storing or processing sensitive information
-**Guidance**: Implement separate subscriptions and/or management groups for development, test, and production. Azure Function apps should be separated by virtual network (VNet)/subnet and tagged appropriately.
+**Guidance**: Implement separate subscriptions and/or management groups for development, test, and production. Function apps should be separated by virtual network (VNet)/subnet and tagged appropriately.
-You may also use Private Endpoints to perform network isolation. An Azure Private Endpoint is a network interface that connects you privately and securely to a service (for example: Azure Functions app HTTPs endpoint) powered by Azure Private Link. Private Endpoint uses a private IP address from your VNet, effectively bringing the service into your VNet. Private endpoints are in (Preview) for function apps running in the Premium plan. Ensure that Private Endpoints are no longer in (Preview) before using them with production workloads.
+You may also use Private Endpoints to perform network isolation. An Azure Private Endpoint is a network interface that connects you privately and securely to a service (for example, a function app HTTPS endpoint) powered by Azure Private Link. Private Endpoint uses a private IP address from your VNet, effectively bringing the service into your VNet. Private endpoints are in preview for function apps running in the Premium plan. Ensure that private endpoints are no longer in preview before using them with production workloads.
- [How to create additional Azure subscriptions](../cost-management-billing/manage/create-subscription.md)
@@ -574,7 +574,7 @@ Microsoft manages the underlying infrastructure for Azure Functions and has impl
### 4.4: Encrypt all sensitive information in transit
-**Guidance**: In the Azure portal for your Azure Function apps, under "Platform Features: Networking: SSL", enable the "HTTPs Only" setting and set the minimum TLS version to 1.2.
+**Guidance**: In the Azure portal for your function apps, under "Platform Features: Networking: SSL", enable the "HTTPS Only" setting and set the minimum TLS version to 1.2.
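A hedged CLI equivalent of those portal settings (the app and group names are placeholders):

```bash
# Enforce HTTPS-only traffic, then raise the minimum TLS version.
az functionapp update --name <function-app> --resource-group <rg> \
  --set httpsOnly=true
az functionapp config set --name <function-app> --resource-group <rg> \
  --min-tls-version 1.2
```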
**Azure Security Center monitoring**: Yes
@@ -594,7 +594,7 @@ For the underlying platform which is managed by Microsoft, Microsoft treats all
### 4.6: Use Azure RBAC to control access to resources
-**Guidance**: Use Azure role-based access control (Azure RBAC) to control access to the Azure Function control plane (the Azure portal).
+**Guidance**: Use Azure role-based access control (Azure RBAC) to control access to the function app control plane (the Azure portal).
- [How to configure Azure RBAC](../role-based-access-control/role-assignments-portal.md)
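For example, a minimal Azure CLI sketch (the assignee, role, and scope are placeholders):

```bash
# Grant a user read-only access scoped to a single function app.
az role assignment create \
  --assignee "user@contoso.com" \
  --role "Reader" \
  --scope "/subscriptions/<sub-id>/resourceGroups/<rg>/providers/Microsoft.Web/sites/<function-app>"
```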
@@ -628,7 +628,7 @@ Microsoft manages the underlying infrastructure for Azure Functions and has impl
### 4.9: Log and alert on changes to critical Azure resources
-**Guidance**: Use Azure Monitor with the Azure Activity log to create alerts for when changes take place to production Azure Function apps as well as other critical or related resources.
+**Guidance**: Use Azure Monitor with the Azure Activity log to create alerts for when changes take place to production function apps as well as other critical or related resources.
- [How to create alerts for Azure Activity Log events](../azure-monitor/platform/alerts-activity-log.md)
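For example, a minimal Azure CLI sketch, assuming an action group already exists (all names and IDs are placeholders):

```bash
# Alert on successful administrative operations against the function app.
az monitor activity-log alert create \
  --name "prod-funcapp-changes" \
  --resource-group <rg> \
  --scope "/subscriptions/<sub-id>/resourceGroups/<rg>/providers/Microsoft.Web/sites/<function-app>" \
  --condition category=Administrative and status=Succeeded \
  --action-group "<action-group-resource-id>"
```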
@@ -642,9 +642,9 @@ Microsoft manages the underlying infrastructure for Azure Functions and has impl
### 5.1: Run automated vulnerability scanning tools
-**Guidance**: Adopt a DevSecOps practice to ensure your Azure Functions applications are secure and remain as secure as possible throughout the duration of their life-cycle. DevSecOps incorporates your organization's security team and their capabilities into your DevOps practices making security a responsibility of everyone on the team.
+**Guidance**: Adopt a DevSecOps practice to ensure your function apps are secure and remain as secure as possible throughout their life cycle. DevSecOps incorporates your organization's security team and their capabilities into your DevOps practices, making security a responsibility of everyone on the team.
-In addition, follow recommendations from Azure Security Center to help secure your Azure Function apps.
+In addition, follow recommendations from Azure Security Center to help secure your function apps.
- [How to add continuous security validation to your CI/CD pipeline](/azure/devops/migrate/security-validation-cicd-pipeline?view=azure-devops)
@@ -824,9 +824,9 @@ Allowed resource types
### 6.13: Physically or logically segregate high risk applications
-**Guidance**: For sensitive or high risk Azure Function apps, implement separate subscriptions and/or management groups to provide isolation.
+**Guidance**: For sensitive or high risk function apps, implement separate subscriptions and/or management groups to provide isolation.
-Deploy high risk Azure Function apps into their own Virtual Network (VNet). Perimeter security in Azure Functions is achieved through VNets. Functions running in the Premium plan or App Service Environment (ASE) can be integrated with VNets. Choose the best architecture for your use case.
+Deploy high risk function apps into their own Virtual Network (VNet). Perimeter security for function apps is achieved through VNets. Functions running in the Premium plan or App Service Environment (ASE) can be integrated with VNets. Choose the best architecture for your use case.
- [Azure Functions networking options](./functions-networking-options.md)
@@ -852,10 +852,10 @@ How to create an internal ASE:
### 7.1: Establish secure configurations for all Azure resources
-**Guidance**: Define and implement standard security configurations for your Azure Function app with Azure Policy. Use Azure Policy aliases in the "Microsoft.Web" namespace to create custom policies to audit or enforce the configuration of your Azure Functions apps. You may also make use of built-in policy definitions such as:
-- Managed identity should be used in your Function App-- Remote debugging should be turned off for Function Apps-- Function App should only be accessible over HTTPS
+**Guidance**: Define and implement standard security configurations for your function app with Azure Policy. Use Azure Policy aliases in the "Microsoft.Web" namespace to create custom policies to audit or enforce the configuration of your function apps. You may also make use of built-in policy definitions such as:
+- Managed identity should be used in your function app
+- Remote debugging should be turned off for function apps
+- Function app should only be accessible over HTTPS
- [How to view available Azure Policy Aliases](/powershell/module/az.resources/get-azpolicyalias?view=azps-3.3.0)
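For example, a hedged Azure CLI sketch that assigns one of the built-in definitions listed above (the display-name lookup and all names are assumptions):

```bash
# Look up the built-in definition by display name, then assign it
# at resource group scope; adjust names and scope to your environment.
defName=$(az policy definition list \
  --query "[?displayName=='Function app should only be accessible over HTTPS'].name" -o tsv)
az policy assignment create --name "func-https-only" \
  --policy "$defName" \
  --scope "/subscriptions/<sub-id>/resourceGroups/<rg>"
```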
@@ -975,7 +975,7 @@ How to create an internal ASE:
### 7.12: Manage identities securely and automatically
-**Guidance**: Use Managed Identities to provide your Azure Function app with an automatically managed identity in Azure AD. Managed Identities allows you to authenticate to any service that supports Azure AD authentication, including Key Vault, without any credentials in your code.
+**Guidance**: Use managed identities to provide your function app with an automatically managed identity in Azure AD. Managed identities allow you to authenticate to any service that supports Azure AD authentication, including Key Vault, without any credentials in your code.
- [How to use managed identities for App Service and Azure Functions](../app-service/overview-managed-identity.md)
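For example, a minimal Azure CLI sketch (the Key Vault step is an assumed follow-on; all names are placeholders):

```bash
# Enable a system-assigned identity, then let it read Key Vault secrets.
principalId=$(az functionapp identity assign \
  --name <function-app> --resource-group <rg> --query principalId -o tsv)
az keyvault set-policy --name <key-vault> \
  --object-id "$principalId" --secret-permissions get list
```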
@@ -1187,4 +1187,4 @@ Additionally, clearly mark subscriptions (for ex. production, non-prod) and crea
## Next steps - See the [Azure security benchmark](../security/benchmarks/overview.md)-- Learn more about [Azure security baselines](../security/benchmarks/security-baselines-overview.md)\ No newline at end of file
+- Learn more about [Azure security baselines](../security/benchmarks/security-baselines-overview.md)
azure-maps https://docs.microsoft.com/en-us/azure/azure-maps/drawing-requirements https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-maps/drawing-requirements.md
@@ -3,7 +3,7 @@ title: Drawing package requirements in Microsoft Azure Maps Creator (Preview)
description: Learn about the Drawing package requirements to convert your facility design files to map data author: anastasia-ms ms.author: v-stharr
-ms.date: 12/07/2020
+ms.date: 01/08/2021
ms.topic: conceptual ms.service: azure-maps services: azure-maps
@@ -37,7 +37,7 @@ For easy reference, here are some terms and definitions that are important as yo
| Layer | An AutoCAD DWG layer.| | Level | An area of a building at a set elevation. For example, the floor of a building. | | Xref |A file in AutoCAD DWG file format (.dwg), attached to the primary drawing as an external reference. |
-| Feature | An object that combines a geometry with additional metadata information. |
+| Feature | An object that combines a geometry with more metadata. |
| Feature classes | A common blueprint for features. For example, a *unit* is a feature class, and an *office* is a feature. | ## Drawing package structure
@@ -45,9 +45,9 @@ For easy reference, here are some terms and definitions that are important as yo
A Drawing package is a .zip archive that contains the following files: * DWG files in AutoCAD DWG file format.
-* A _manifest.json_ file for a single facility.
+* A _manifest.json_ file that describes the DWG files in the Drawing package.
-You can organize the DWG files in any way inside the folder, but the manifest file must live at the root directory of the folder. You must zip the folder in a single archive file, with a .zip extension. The next sections detail the requirements for the DWG files, the manifest file, and the content of these files.
+The Drawing package must be zipped into a single archive file, with the .zip extension. The DWG files can be organized in any way inside the package, but the manifest file must live at the root directory of the zipped package. The next sections detail the requirements for the DWG files, manifest file, and the content of these files.
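For example, assuming a hypothetical folder layout:

```bash
# Hypothetical file names; manifest.json must sit at the root of the archive.
#   my-facility/manifest.json
#   my-facility/level_1.dwg
#   my-facility/level_2.dwg
cd my-facility && zip -r ../drawing-package.zip .
```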
## DWG files requirements
@@ -56,6 +56,7 @@ A single DWG file is required for each level of the facility. The level's data m
* Must define the _Exterior_ and _Unit_ layers. It can optionally define the following optional layers: _Wall_, _Door_, _UnitLabel_, _Zone_, and _ZoneLabel_. * Must not contain features from multiple levels. * Must not contain features from multiple facilities.
+* Must reference the same measurement system and unit of measurement as other DWG files in the Drawing package.
The [Azure Maps Conversion service](/rest/api/maps/conversion) can extract the following feature classes from a DWG file:
@@ -74,19 +75,19 @@ DWG layers must also follow the following criteria:
* The origins of drawings for all DWG files must align to the same latitude and longitude. * Each level must be in the same orientation as the other levels.
-* Self-intersecting polygons are automatically repaired, and the [Azure Maps Conversion service](/rest/api/maps/conversion) raises a warning. You should manually inspect the repaired results, because they might not match the expected results.
+* Self-intersecting polygons are automatically repaired, and the [Azure Maps Conversion service](/rest/api/maps/conversion) raises a warning. It's advisable to manually inspect the repaired results, because they might not match the expected results.
-All layer entities must be one of the following types: Line, PolyLine, Polygon, Circular Arc, Circle, or Text (single line). Any other entity types are ignored.
+All layer entities must be one of the following types: Line, PolyLine, Polygon, Circular Arc, Circle, Ellipse (closed), or Text (single line). Any other entity types are ignored.
-The following table outlines the supported entity types and supported features for each layer. If a layer contains unsupported entity types, then the [Azure Maps Conversion service](/rest/api/maps/conversion) ignores these entities.
+The table below outlines the supported entity types and converted map features for each layer. If a layer contains unsupported entity types, then the [Azure Maps Conversion service](/rest/api/maps/conversion) ignores these entities.
-| Layer | Entity types | Features |
+| Layer | Entity types | Converted Features |
| :----- | :-------------------| :-------
-| [Exterior](#exterior-layer) | Polygon, PolyLine (closed), Circle | Levels
-| [Unit](#unit-layer) | Polygon, PolyLine (closed), Circle | Vertical penetrations, Units
-| [Wall](#wall-layer) | Polygon, PolyLine (closed), Circle | Not applicable. For more information, see the [Wall layer](#wall-layer).
+| [Exterior](#exterior-layer) | Polygon, PolyLine (closed), Circle, Ellipse (closed) | Levels
+| [Unit](#unit-layer) | Polygon, PolyLine (closed), Circle, Ellipse (closed) | Vertical penetrations, Unit
+| [Wall](#wall-layer) | Polygon, PolyLine (closed), Circle, Ellipse (closed) | Not applicable. For more information, see the [Wall layer](#wall-layer).
| [Door](#door-layer) | Polygon, PolyLine, Line, CircularArc, Circle | Openings
-| [Zone](#zone-layer) | Polygon, PolyLine (closed), Circle | Zone
+| [Zone](#zone-layer) | Polygon, PolyLine (closed), Circle, Ellipse (closed) | Zone
| [UnitLabel](#unitlabel-layer) | Text (single line) | Not applicable. This layer can only add properties to the unit features from the Units layer. For more information, see the [UnitLabel layer](#unitlabel-layer). | [ZoneLabel](#zonelabel-layer) | Text (single line) | Not applicable. This layer can only add properties to zone features from the ZonesLayer. For more information, see the [ZoneLabel layer](#zonelabel-layer).
@@ -98,8 +99,10 @@ The DWG file for each level must contain a layer to define that level's perimete
No matter how many entity drawings are in the exterior layer, the [resulting facility dataset](tutorial-creator-indoor-maps.md#create-a-feature-stateset) will contain only one level feature for each DWG file. Additionally:
-* Exteriors must be drawn as Polygon, PolyLine (closed), or Circle.
-* Exteriors can overlap, but are dissolved into one geometry.
+* Exteriors must be drawn as Polygon, PolyLine (closed), Circle, or Ellipse (closed).
+* Exteriors may overlap, but are dissolved into one geometry.
+* The resulting level feature must be at least 4 square meters.
+* The resulting level feature must not be greater than 400 square meters.
If the layer contains multiple overlapping PolyLines, the PolyLines are dissolved into a single Level feature. Alternatively, if the layer contains multiple non-overlapping PolyLines, the resulting Level feature has a multi-polygonal representation.
@@ -107,9 +110,11 @@ You can see an example of the Exterior layer as the outline layer in the [sample
### Unit layer
-The DWG file for each level defines a layer containing units. Units are navigable spaces in the building, such as offices, hallways, stairs, and elevators. The Units layer should adhere to the following requirements:
+The DWG file for each level defines a layer containing units. Units are navigable spaces in the building, such as offices, hallways, stairs, and elevators. If the `VerticalPenetrationCategory` property is defined, navigable units that span multiple levels, such as elevators and stairs, are converted to Vertical Penetration features. Vertical penetration features that overlap each other are assigned one `setid`.
-* Units must be drawn as Polygon, PolyLine (closed), or Circle.
+The Units layer should adhere to the following requirements:
+
+* Units must be drawn as Polygon, PolyLine (closed), Circle, or Ellipse (closed).
* Units must fall inside the bounds of the facility exterior perimeter. * Units must not partially overlap. * Units must not contain any self-intersecting geometry.
@@ -122,7 +127,7 @@ You can see an example of the Units layer in the [sample Drawing package](https:
The DWG file for each level can contain a layer that defines the physical extents of walls, columns, and other building structure.
-* Walls must be drawn as Polygon, PolyLine (closed), or Circle.
+* Walls must be drawn as Polygon, PolyLine (closed), Circle, or Ellipse (closed).
* The wall layer or layers should only contain geometry that's interpreted as building structure. You can see an example of the Walls layer in the [sample Drawing package](https://github.com/Azure-Samples/am-creator-indoor-data-examples).
@@ -137,9 +142,9 @@ Door openings in an Azure Maps dataset are represented as a single-line segment
### Zone layer
-The DWG file for each level can contain a Zone layer that defines the physical extents of zones. A zone can be an indoor empty space or a back yard.
+The DWG file for each level can contain a Zone layer that defines the physical extents of zones. A zone is a non-navigable space that can be named and rendered. Zones can span multiple levels and are grouped together using the zoneSetId property.
-* Zones must be drawn as Polygon, PolyLine (closed), or Circle.
+* Zones must be drawn as Polygon, PolyLine (closed), Circle, or Ellipse (closed).
* Zones can overlap. * Zones can fall inside or outside the facility's exterior perimeter.
@@ -149,7 +154,7 @@ You can see an example of the Zone layer in the [sample Drawing package](https:/
### UnitLabel layer
-The DWG file for each level can contain a UnitLabel layer. The UnitLabel layer adds a name property to units extracted from the Unit layer. Units with a name property can have additional details specified in the manifest file.
+The DWG file for each level can contain a UnitLabel layer. The UnitLabel layer adds a name property to units extracted from the Unit layer. Units with a name property can have more details specified in the manifest file.
* Unit labels must be single-line text entities. * Unit labels must fall inside the bounds of their unit.
@@ -159,7 +164,7 @@ You can see an example of the UnitLabel layer in the [sample Drawing package](ht
### ZoneLabel layer
-The DWG file for each level can contain a ZoneLabel layer. This layer adds a name property to zones extracted from the Zone layer. Zones with a name property can have additional details specified in the manifest file.
+The DWG file for each level can contain a ZoneLabel layer. This layer adds a name property to zones extracted from the Zone layer. Zones with a name property can have more details specified in the manifest file.
* Zones labels must be single-line text entities. * Zones labels must fall inside the bounds of their zone.
@@ -182,8 +187,8 @@ Although there are requirements when you use the manifest objects, not all objec
| `buildingLevels` | true | Specifies the levels of the buildings and the files containing the design of the levels. | | `georeference` | true | Contains numerical geographic information for the facility drawing. | | `dwgLayers` | true | Lists the names of the layers, and each layer lists the names of its own features. |
-| `unitProperties` | false | Can be used to insert additional metadata for the unit features. |
-| `zoneProperties` | false | Can be used to insert additional metadata for the zone features. |
+| `unitProperties` | false | Can be used to insert more metadata for the unit features. |
+| `zoneProperties` | false | Can be used to insert more metadata for the zone features. |
The next sections detail the requirements for each object.
@@ -256,7 +261,7 @@ The `unitProperties` object contains a JSON array of unit properties.
|`verticalPenetrationDirection`| string| false |If `verticalPenetrationCategory` is defined, optionally define the valid direction of travel. The permitted values are: `lowToHigh`, `highToLow`, `both`, and `closed`. The default value is `both`.| | `nonPublic` | bool | false | Indicates if the unit is open to the public. | | `isRoutable` | bool | false | When this property is set to `false`, you can't go to or through the unit. The default value is `true`. |
-| `isOpenArea` | bool | false | Allows the navigating agent to enter the unit without the need for an opening attached to the unit. By default, this value is set to `true` for units with no openings, and `false` for units with openings. Manually setting `isOpenArea` to `false` on a unit with no openings results in a warning. This is because the resulting unit won't be reachable by a navigating agent.|
+| `isOpenArea` | bool | false | Allows the navigating agent to enter the unit without the need for an opening attached to the unit. By default, this value is set to `true` for units with no openings, and `false` for units with openings. Manually setting `isOpenArea` to `false` on a unit with no openings results in a warning, because the resulting unit won't be reachable by a navigating agent.|
### `zoneProperties`
@@ -272,7 +277,7 @@ The `zoneProperties` object contains a JSON array of zone properties.
### Sample Drawing package manifest
-The following is a sample manifest file for the sample Drawing package. To download the entire package, see [sample Drawing package](https://github.com/Azure-Samples/am-creator-indoor-data-examples).
+Below is the manifest file for the sample Drawing package. To download the entire package, see [sample Drawing package](https://github.com/Azure-Samples/am-creator-indoor-data-examples).
#### Manifest file
azure-monitor https://docs.microsoft.com/en-us/azure/azure-monitor/insights/container-insights-livedata-setup https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-monitor/insights/container-insights-livedata-setup.md
@@ -2,7 +2,7 @@
title: Set up Azure Monitor for containers Live Data (preview) | Microsoft Docs description: This article describes how to set up the real-time view of container logs (stdout/stderr) and events without using kubectl with Azure Monitor for containers. ms.topic: conceptual
-ms.date: 02/14/2019
+ms.date: 01/08/2021
ms.custom: references_regions ---
@@ -24,8 +24,6 @@ This article explains how to configure authentication to control access to the L
- Kubernetes role-based access control (Kubernetes RBAC) enabled AKS cluster - Azure Active Directory integrated AKS cluster.
->[!NOTE]
->AKS clusters enabled as [private clusters](https://azure.microsoft.com/updates/aks-private-cluster/) are not supported with this feature. This feature relies on directly accessing the Kubernetes API through a proxy server from your browser. Enabling networking security to block the Kubernetes API from this proxy will block this traffic.
## Authentication model
azure-monitor https://docs.microsoft.com/en-us/azure/azure-monitor/platform/agent-windows-troubleshoot https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-monitor/platform/agent-windows-troubleshoot.md
@@ -18,6 +18,40 @@ If none of these steps work for you, the following support channels are also ava
* Customers with Azure support agreements can open a support request [in the Azure portal](https://manage.windowsazure.com/?getsupport=true). * Visit the Log Analytics Feedback page to review submitted ideas and bugs [https://aka.ms/opinsightsfeedback](https://aka.ms/opinsightsfeedback) or file a new one.
+## Log Analytics Troubleshooting Tool
+
+The Log Analytics Agent Windows Troubleshooting Tool is a collection of PowerShell scripts designed to help find and diagnose issues with the Log Analytics Agent. It is automatically included with the agent upon installation. Running the tool should be the first step in diagnosing an issue.
+
+### How to use
1. Open a PowerShell prompt as Administrator on the machine where the Log Analytics Agent is installed.
+1. Navigate to the directory where the tool is located.
+ * `cd "C:\Program Files\Microsoft Monitoring Agent\Agent\Troubleshooter"`
+1. Execute the main script using this command:
+ * `.\GetAgentInfo.ps1`
+1. Select a troubleshooting scenario.
1. Follow the instructions on the console. (Note: the trace log steps require manual intervention to stop log collection. Based on the reproducibility of the issue, wait for the needed duration, then press 's' to stop log collection and proceed to the next step.)
+
+   The location of the results file is logged upon completion, and a new Explorer window highlighting it is opened.
+
+### Installation
+The Troubleshooting Tool is automatically included with installations of the Log Analytics Agent, build 10.20.18053.0 and later.
+
+### Scenarios covered
+Below is a list of scenarios checked by the Troubleshooting Tool:
+
+- Agent not reporting data or heartbeat data missing
+- Agent extension deployment failing
+- Agent crashing
+- Agent consuming high CPU/memory
+- Installation/uninstallation failures
+- Custom logs issue
+- OMS Gateway issue
+- Performance counters issue
+- Collect all logs
+
+>[!NOTE]
+>Run the Troubleshooting Tool when you experience an issue. Having the logs up front when you open a ticket will help our support team troubleshoot your issue more quickly.
+ ## Important troubleshooting sources To assist with troubleshooting issues related to Log Analytics agent for Windows, the agent logs events to the Windows Event Log, specifically under *Application and Services\Operations Manager*.
azure-monitor https://docs.microsoft.com/en-us/azure/azure-monitor/platform/azure-data-explorer-monitor-proxy https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-monitor/platform/azure-data-explorer-monitor-proxy.md
@@ -1,7 +1,7 @@
--- title: Query data in Azure Monitor using Azure Data Explorer (preview) description: Use Azure Data Explorer to perform cross product queries between Azure Data Explorer, Log Analytics workspaces and classic Application Insights applications in Azure Monitor.
-author: orens
+author: osalzberg
ms.author: bwren ms.reviewer: bwren ms.subservice: logs
azure-monitor https://docs.microsoft.com/en-us/azure/azure-monitor/platform/azure-data-explorer-query-storage https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-monitor/platform/azure-data-explorer-query-storage.md
@@ -2,7 +2,7 @@
title: Query exported data from Azure Monitor using Azure Data Explorer (preview) description: Use Azure Data Explorer to query data that was exported from your Log Analytics workspace to an Azure storage account. ms.subservice: logs
-author: orens
+author: osalzberg
ms.author: bwren ms.reviewer: bwren ms.topic: conceptual
azure-monitor https://docs.microsoft.com/en-us/azure/azure-monitor/platform/azure-monitor-data-explorer-proxy https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-monitor/platform/azure-monitor-data-explorer-proxy.md
@@ -1,7 +1,7 @@
--- title: Cross-resource query Azure Data Explorer by using Azure Monitor description: Use Azure Monitor to perform cross-product queries between Azure Data Explorer, Log Analytics workspaces, and classic Application Insights applications in Azure Monitor.
-author: orens
+author: osalzberg
ms.author: bwren ms.reviewer: bwren ms.subservice: logs
@@ -17,7 +17,7 @@ The following diagram shows the Azure Monitor cross-service flow:
:::image type="content" source="media\azure-data-explorer-monitor-proxy\azure-monitor-data-explorer-flow.png" alt-text="Diagram that shows the flow of queries between a user, Azure Monitor, a proxy, and Azure Data Explorer."::: >[!NOTE]
-> Azure Monitor cross-service query is in private preview. Allowlisting is required. Contact the [Service Team](mailto:ADXProxy@microsoft.com) with any questions.
+> Azure Monitor cross-service query is in public preview. Contact the [Service Team](mailto:ADXProxy@microsoft.com) with any questions.
## Cross-query your Log Analytics or Application Insights resources and Azure Data Explorer
azure-monitor https://docs.microsoft.com/en-us/azure/azure-monitor/platform/security-controls-policy https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-monitor/platform/security-controls-policy.md
@@ -1,7 +1,7 @@
--- title: Azure Policy Regulatory Compliance controls for Azure Monitor description: Lists Azure Policy Regulatory Compliance controls available for Azure Monitor. These built-in policy definitions provide common approaches to managing the compliance of your Azure resources.
-ms.date: 11/20/2020
+ms.date: 01/08/2021
ms.topic: sample author: rboucher ms.author: robb
azure-monitor https://docs.microsoft.com/en-us/azure/azure-monitor/samples/policy-reference https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-monitor/samples/policy-reference.md
@@ -1,7 +1,7 @@
--- title: Built-in policy definitions for Azure Monitor description: Lists Azure Policy built-in policy definitions for Azure Monitor. These built-in policy definitions provide common approaches to managing your Azure resources.
-ms.date: 11/20/2020
+ms.date: 01/08/2021
ms.topic: reference author: bwren ms.author: bwren
azure-portal https://docs.microsoft.com/en-us/azure/azure-portal/policy-reference https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-portal/policy-reference.md
@@ -1,7 +1,7 @@
--- title: Built-in policy definitions for Azure portal description: Lists Azure Policy built-in policy definitions for Azure portal. These built-in policy definitions provide common approaches to managing your Azure resources.
-ms.date: 11/17/2020
+ms.date: 01/08/2021
ms.topic: reference ms.custom: subject-policy-reference ---
azure-resource-manager https://docs.microsoft.com/en-us/azure/azure-resource-manager/custom-providers/policy-reference https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-resource-manager/custom-providers/policy-reference.md
@@ -1,7 +1,7 @@
--- title: Built-in policy definitions for Azure Custom Resource Providers description: Lists Azure Policy built-in policy definitions for Azure Custom Resource Providers. These built-in policy definitions provide common approaches to managing your Azure resources.
-ms.date: 11/20/2020
+ms.date: 01/08/2021
ms.topic: reference author: jjbfour ms.author: jobreen
azure-resource-manager https://docs.microsoft.com/en-us/azure/azure-resource-manager/managed-applications/policy-reference https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-resource-manager/managed-applications/policy-reference.md
@@ -1,7 +1,7 @@
--- title: Built-in policy definitions for Azure Managed Applications description: Lists Azure Policy built-in policy definitions for Azure Managed Applications. These built-in policy definitions provide common approaches to managing your Azure resources.
-ms.date: 11/20/2020
+ms.date: 01/08/2021
ms.topic: reference author: tfitzmac ms.author: tomfitz
azure-resource-manager https://docs.microsoft.com/en-us/azure/azure-resource-manager/management/policy-reference https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-resource-manager/management/policy-reference.md
@@ -1,7 +1,7 @@
--- title: Built-in policy definitions for Azure Resource Manager description: Lists Azure Policy built-in policy definitions for Azure Resource Manager. These built-in policy definitions provide common approaches to managing your Azure resources.
-ms.date: 11/20/2020
+ms.date: 01/08/2021
ms.topic: reference ms.custom: subject-policy-reference ---
azure-resource-manager https://docs.microsoft.com/en-us/azure/azure-resource-manager/management/resources-without-resource-group-limit https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-resource-manager/management/resources-without-resource-group-limit.md
@@ -2,7 +2,7 @@
title: Resources without 800 count limit description: Lists the Azure resource types that can have more than 800 instances in a resource group. ms.topic: conceptual
-ms.date: 10/28/2020
+ms.date: 01/08/2021
--- # Resources not limited to 800 instances per resource group
@@ -11,7 +11,6 @@ By default, you can deploy up to 800 instances of a resource type in each resour
For some resource types, you need to contact support to have the 800 instance limit removed. Those resource types are noted in this article. - ## Microsoft.Automation * automationAccounts
@@ -101,6 +100,11 @@ For some resource types, you need to contact support to have the 800 instance li
* softwareUpdateProfile * softwareUpdates
+## Microsoft.HybridCompute
+
+* machines - supports up to 5,000 instances
+* extensions - supports an unlimited number of VM extension instances
+ ## microsoft.insights * metricalerts
azure-resource-manager https://docs.microsoft.com/en-us/azure/azure-resource-manager/management/security-controls-policy https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-resource-manager/management/security-controls-policy.md
@@ -1,7 +1,7 @@
--- title: Azure Policy Regulatory Compliance controls for Azure Resource Manager description: Lists Azure Policy Regulatory Compliance controls available for Azure Resource Manager. These built-in policy definitions provide common approaches to managing the compliance of your Azure resources.
-ms.date: 11/20/2020
+ms.date: 01/08/2021
ms.topic: sample author: tfitzmac ms.author: tomfitz
azure-signalr https://docs.microsoft.com/en-us/azure/azure-signalr/policy-reference https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-signalr/policy-reference.md
@@ -1,7 +1,7 @@
--- title: Built-in policy definitions for Azure SignalR description: Lists Azure Policy built-in policy definitions for Azure SignalR. These built-in policy definitions provide common approaches to managing your Azure resources.
-ms.date: 11/20/2020
+ms.date: 01/08/2021
author: sffamily ms.author: zhshang ms.service: signalr
azure-sql https://docs.microsoft.com/en-us/azure/azure-sql/database/policy-reference https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-sql/database/policy-reference.md
@@ -1,7 +1,7 @@
--- title: Built-in policy definitions for Azure SQL Database description: Lists Azure Policy built-in policy definitions for Azure SQL Database and SQL Managed Instance. These built-in policy definitions provide common approaches to managing your Azure resources.
-ms.date: 11/20/2020
+ms.date: 01/08/2021
ms.topic: reference author: stevestein ms.author: sstein
azure-sql https://docs.microsoft.com/en-us/azure/azure-sql/database/security-controls-policy https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-sql/database/security-controls-policy.md
@@ -1,7 +1,7 @@
--- title: Azure Policy Regulatory Compliance controls for Azure SQL Database description: Lists Azure Policy Regulatory Compliance controls available for Azure SQL Database and SQL Managed Instance. These built-in policy definitions provide common approaches to managing the compliance of your Azure resources.
-ms.date: 11/20/2020
+ms.date: 01/08/2021
ms.topic: sample author: stevestein ms.author: sstein
azure-sql https://docs.microsoft.com/en-us/azure/azure-sql/virtual-machines/windows/availability-group-manually-configure-prerequisites-tutorial https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-sql/virtual-machines/windows/availability-group-manually-configure-prerequisites-tutorial.md
@@ -537,7 +537,7 @@ Repeat the steps on the other SQL Server VM.
### Tuning Failover Cluster Network Thresholds
-When running Windows Failover Cluster nodes in Azure Vms with SQL Server AlwaysOn, changing the cluster setting to a more relaxed monitoring state is recommended. This will make the cluster much more stable and reliable. For details on this, see [IaaS with SQL AlwaysOn - Tuning Failover Cluster Network Thresholds](/windows-server/troubleshoot/iaas-sql-failover-cluster).
+When running Windows Failover Cluster nodes in Azure VMs with SQL Server availability groups, change the cluster setting to a more relaxed monitoring state. This will make the cluster much more stable and reliable. For details, see [IaaS with SQL Server - Tuning Failover Cluster Network Thresholds](/windows-server/troubleshoot/iaas-sql-failover-cluster).
## <a name="endpoint-firewall"></a> Configure the firewall on each SQL Server VM
backup https://docs.microsoft.com/en-us/azure/backup/policy-reference https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/backup/policy-reference.md
@@ -1,7 +1,7 @@
--- title: Built-in policy definitions for Azure Backup description: Lists Azure Policy built-in policy definitions for Azure Backup. These built-in policy definitions provide common approaches to managing your Azure resources.
-ms.date: 11/20/2020
+ms.date: 01/08/2021
ms.topic: reference ms.custom: subject-policy-reference ---
backup https://docs.microsoft.com/en-us/azure/backup/security-controls-policy https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/backup/security-controls-policy.md
@@ -1,7 +1,7 @@
--- title: Azure Policy Regulatory Compliance controls for Azure Backup description: Lists Azure Policy Regulatory Compliance controls available for Azure Backup. These built-in policy definitions provide common approaches to managing the compliance of your Azure resources.
-ms.date: 11/20/2020
+ms.date: 01/08/2021
ms.topic: sample author: dcurwin ms.author: dacurwin
batch https://docs.microsoft.com/en-us/azure/batch/policy-reference https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/batch/policy-reference.md
@@ -1,7 +1,7 @@
--- title: Built-in policy definitions for Azure Batch description: Lists Azure Policy built-in policy definitions for Azure Batch. These built-in policy definitions provide common approaches to managing your Azure resources.
-ms.date: 11/20/2020
+ms.date: 01/08/2021
ms.topic: reference ms.custom: subject-policy-reference ---
batch https://docs.microsoft.com/en-us/azure/batch/security-controls-policy https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/batch/security-controls-policy.md
@@ -1,7 +1,7 @@
--- title: Azure Policy Regulatory Compliance controls for Azure Batch description: Lists Azure Policy Regulatory Compliance controls available for Azure Batch. These built-in policy definitions provide common approaches to managing the compliance of your Azure resources.
-ms.date: 11/20/2020
+ms.date: 01/08/2021
ms.topic: sample author: JnHs ms.author: jenhayes
blockchain https://docs.microsoft.com/en-us/azure/blockchain/templates/hyperledger-fabric-consortium-azure-kubernetes-service https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/blockchain/templates/hyperledger-fabric-consortium-azure-kubernetes-service.md
@@ -1,7 +1,7 @@
--- title: Deploy Hyperledger Fabric consortium on Azure Kubernetes Service description: How to deploy and configure a Hyperledger Fabric consortium network on Azure Kubernetes Service
-ms.date: 08/06/2020
+ms.date: 01/08/2021
ms.topic: how-to ms.reviewer: ravastra ---
@@ -101,7 +101,7 @@ To get started with the deployment of Hyperledger Fabric network components, go
- **DNS prefix**: Enter a Domain Name System (DNS) name prefix for the AKS cluster. You'll use DNS to connect to the Kubernetes API when managing containers after you create the cluster. - **Node size**: For the size of the Kubernetes node, you can choose from the list of VM stock-keeping units (SKUs) available on Azure. For optimal performance, we recommend Standard DS3 v2. - **Node count**: Enter the number of Kubernetes nodes to be deployed in the cluster. We recommend keeping this node count equal to or more than the number of Hyperledger Fabric nodes specified on the **Fabric settings** tab.
- - **Service principal client ID**: Enter the client ID of an existing service principal or create a new one. A service principal is required for AKS authentication. See the [steps to create a service principal](/powershell/azure/create-azure-service-principal-azureps?view=azps-3.2.0#create-a-service-principal).
+ - **Service principal client ID**: Enter the client ID of an existing service principal or create a new one. A service principal is required for AKS authentication. See the [steps to create a service principal](/powershell/azure/create-azure-service-principal-azureps#create-a-service-principal), or see the sample command after this list.
- **Service principal client secret**: Enter the client secret of the service principal provided in the client ID for the service principal. - **Confirm client secret**: Confirm the client secret for the service principal. - **Enable container monitoring**: Choose to enable AKS monitoring, which enables the AKS logs to push to the specified Log Analytics workspace.
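If you need to create a new service principal, a minimal Azure CLI sketch (the display name is a placeholder, not from this article):

```bash
# Prints an appId (client ID) and password (client secret) that you can
# enter in the service principal fields above; the name is hypothetical.
az ad sp create-for-rbac --name "hlf-aks-sp" --skip-assignment
```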
@@ -389,23 +389,35 @@ Pass the query function name and space-separated list of arguments inΓÇ»`<queryF
## Troubleshoot
-Run the following commands to find the version of your template deployment.
+### Find deployed version
-Set environment variables according to the resource group where the template has been deployed.
+Run the following commands to find the version of your template deployment. Set environment variables according to the resource group where the template has been deployed.
```bash
+SWITCH_TO_AKS_CLUSTER() { az aks get-credentials --resource-group $1 --name $2 --subscription $3; }
+AKS_CLUSTER_SUBSCRIPTION=<AKSClusterSubscriptionID>
+AKS_CLUSTER_RESOURCE_GROUP=<AKSClusterResourceGroup>
+AKS_CLUSTER_NAME=<AKSClusterName>
+SWITCH_TO_AKS_CLUSTER $AKS_CLUSTER_RESOURCE_GROUP $AKS_CLUSTER_NAME $AKS_CLUSTER_SUBSCRIPTION
+kubectl describe pod fabric-tools -n tools | grep "Image:" | cut -d ":" -f 3
+```
+
+### Patch previous version
-SWITCH_TO_AKS_CLUSTER() { az aks get-credentials --resource-group $1 --name $2 --subscription $3; }
-AKS_CLUSTER_SUBSCRIPTION=<AKSClusterSubscriptionID>
-AKS_CLUSTER_RESOURCE_GROUP=<AKSClusterResourceGroup>
-AKS_CLUSTER_NAME=<AKSClusterName>
+If you face issues running chaincode on any deployment with a template version below v3.0.0, follow the steps below to patch your peer nodes with a fix.
+
+Download the peer deployment script.
+
+```bash
+curl https://raw.githubusercontent.com/Azure/Hyperledger-Fabric-on-Azure-Kubernetes-Service/master/scripts/patchPeerDeployment.sh -o patchPeerDeployment.sh; chmod 777 patchPeerDeployment.sh
```
-Run the following command to print the template version.
+
+Run the script using the following command, replacing the parameters for your peer.
```bash
-SWITCH_TO_AKS_CLUSTER $AKS_CLUSTER_RESOURCE_GROUP $AKS_CLUSTER_NAME $AKS_CLUSTER_SUBSCRIPTION
-kubectl describe pod fabric-tools -n tools | grep "Image:" | cut -d ":" -f 3
+source patchPeerDeployment.sh <peerOrgSubscription> <peerOrgResourceGroup> <peerOrgAKSClusterName>
+```
+
+Wait for all your peer nodes to be patched. You can check the status of your peer nodes at any time, in a different shell instance, using the following command.
+```bash
+kubectl get pods -n hlf
``` ## Support and feedback
cognitive-services https://docs.microsoft.com/en-us/azure/cognitive-services/Computer-vision/includes/quickstarts-sdk/csharp-sdk https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/Computer-vision/includes/quickstarts-sdk/csharp-sdk.md
@@ -43,7 +43,7 @@ Once you've created a new project, install the client library by right-clicking
#### [CLI](#tab/cli)
-In a console window (such as cmd, PowerShell, or Bash), use the `dotnet new` command to create a new console app with the name `computer-vision-quickstart`. This command creates a simple "Hello World" C# project with a single source file: *ComputerVisionQuickstart.cs*.
+In a console window (such as cmd, PowerShell, or Bash), use the `dotnet new` command to create a new console app with the name `computer-vision-quickstart`. This command creates a simple "Hello World" C# project with a single source file: *Program.cs*.
```console dotnet new console -n (product-name)-quickstart
cognitive-services https://docs.microsoft.com/en-us/azure/cognitive-services/LUIS/includes/luis-portal-note https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/LUIS/includes/luis-portal-note.md
@@ -7,10 +7,10 @@ manager: nitinme
ms.service: cognitive-services ms.subservice: language-understanding ms.topic: include
-ms.date: 12/16/2020
+ms.date: 01/08/2021
--- > [!NOTE]
-> Starting January 4th, the regional portals (au.luis.ai and eu.luis.ai) will be consolidated into a single portal and URL. If you were using one of these portals, you will be automatically re-directed to [luis.ai](https://luis.ai/). You will continue using the same regional resources you created and your data will continue to be saved and processed in the same region as your resource.
\ No newline at end of file
+> Starting January 18th, the regional portals (au.luis.ai and eu.luis.ai) will be consolidated into a single portal and URL. If you were using one of these portals, you will be automatically re-directed to [luis.ai](https://luis.ai/). You will continue using the same regional resources you created and your data will continue to be saved and processed in the same region as your resource.
cognitive-services https://docs.microsoft.com/en-us/azure/cognitive-services/QnAMaker/whats-new https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/QnAMaker/whats-new.md
@@ -21,6 +21,8 @@ Learn what's new with QnA Maker.
### November 2020 * New version of QnA Maker launched in free Public Preview. Read more [here](https://techcommunity.microsoft.com/t5/azure-ai/introducing-qna-maker-managed-now-in-public-preview/ba-p/1845575).+
+> [!VIDEO https://channel9.msdn.com/Shows/AI-Show/Introducing-QnA-managed-Now-in-Public-Preview/player]
* Simplified resource creation * End to End region support * Deep learnt ranking model
@@ -82,4 +84,4 @@ Learn what's new with QnA Maker.
## Cognitive Service updates
-[Azure update announcements for Cognitive Services](https://azure.microsoft.com/updates/?product=cognitive-services)
\ No newline at end of file
+[Azure update announcements for Cognitive Services](https://azure.microsoft.com/updates/?product=cognitive-services)
cognitive-services https://docs.microsoft.com/en-us/azure/cognitive-services/Speech-Service/faq-stt https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/Speech-Service/faq-stt.md
@@ -129,11 +129,11 @@ See [Speech Services Quotas and Limits](speech-services-quotas-and-limits.md).
**A**: Yes. You can transcribe it yourself or use a professional transcription service. Some users prefer professional transcribers and others use crowdsourcing or do the transcriptions themselves.
-**Q: How long will it take to train a custom model audio data?**
+**Q: How long will it take to train a custom model with audio data?**
**A**: Training a model with audio data is a lengthy process. Depending on the amount of data, it can take several days to create a custom model. If it cannot be finished within one week, the service might abort the training operation and report the model as failed. For faster results, use one of the [regions](custom-speech-overview.md#set-up-your-azure-account) where dedicated hardware is available for training. You can copy the fully trained model to another region using the [REST API](https://centralus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-0/operations/CopyModelToSubscription). Training with just text is much faster and typically finishes within minutes.
-Some base models cannot be customized with audio data. For them the service will just use the text of the transcription for training and discard the audio data. Training will then be finished much faster and results will be the same as training with just text.
+Some base models cannot be customized with audio data. For them the service will just use the text of the transcription for training and ignore the audio data. Training will then be finished much faster and results will be the same as training with just text.
## Accuracy testing
cognitive-services https://docs.microsoft.com/en-us/azure/cognitive-services/Speech-Service/rest-speech-to-text https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/Speech-Service/rest-speech-to-text.md
@@ -8,7 +8,7 @@ manager: nitinme
ms.service: cognitive-services ms.subservice: speech-service ms.topic: conceptual
-ms.date: 12/10/2020
+ms.date: 01/08/2021
ms.author: trbye ms.custom: devx-track-csharp ---
@@ -56,7 +56,7 @@ Before using the Speech-to-text REST API for short audio, consider the following
If sending longer audio is a requirement for your application, consider using the [Speech SDK](speech-sdk.md) or [Speech-to-text REST API v3.0](#speech-to-text-rest-api-v30). > [!TIP]
-> See the Azure government [documentation](../../azure-government/compare-azure-government-global-azure.md) for government cloud (FairFax) endpoints.
+> See [this article](sovereign-clouds.md) for Azure Government and Azure China endpoints.
[!INCLUDE [](../../../includes/cognitive-services-speech-service-rest-auth.md)]
cognitive-services https://docs.microsoft.com/en-us/azure/cognitive-services/Speech-Service/rest-text-to-speech https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/Speech-Service/rest-text-to-speech.md
@@ -8,7 +8,7 @@ manager: nitinme
ms.service: cognitive-services ms.subservice: speech-service ms.topic: conceptual
-ms.date: 03/23/2020
+ms.date: 01/08/2021
ms.author: trbye ms.custom: references_regions ---
@@ -30,7 +30,7 @@ Before using this API, understand:
* The text-to-speech REST API requires an Authorization header. This means that you need to complete a token exchange to access the service. For more information, see [Authentication](#authentication). > [!TIP]
-> See the [Azure government documentation](/azure/azure-government/compare-azure-government-global-azure) for government cloud (FairFax) endpoints.
+> See [this article](sovereign-clouds.md) for Azure Government and Azure China endpoints.
[!INCLUDE [](../../../includes/cognitive-services-speech-service-rest-auth.md)]
@@ -71,7 +71,10 @@ This table lists required and optional headers for text-to-speech requests.
| Header | Description | Required / Optional | |--------|-------------|---------------------|
-| `Authorization` | An authorization token preceded by the word `Bearer`. For more information, see [Authentication](#authentication). | Required |
+| `Ocp-Apim-Subscription-Key` | Your Speech service subscription key. | Either this header or `Authorization` is required. |
+| `Authorization` | An authorization token preceded by the word `Bearer`. For more information, see [Authentication](#authentication). | Either this header or `Ocp-Apim-Subscription-Key` is required. |
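As an illustrative sketch, a synthesis request using the subscription-key header might look like the following (the region, key, output format, and voice name are assumptions to adapt):

```bash
# Hedged sketch: region, key, and voice name are placeholders, and the
# output format is one of the service's documented riff PCM formats.
curl -X POST "https://westus.tts.speech.microsoft.com/cognitiveservices/v1" \
  -H "Ocp-Apim-Subscription-Key: YOUR_SUBSCRIPTION_KEY" \
  -H "Content-Type: application/ssml+xml" \
  -H "X-Microsoft-OutputFormat: riff-24khz-16bit-mono-pcm" \
  -d "<speak version='1.0' xmlns='http://www.w3.org/2001/10/synthesis' xml:lang='en-US'><voice name='YOUR_VOICE_NAME'>Hello, world.</voice></speak>" \
  -o output.wav
```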
++ ### Request body
@@ -85,7 +88,7 @@ This request only requires an authorization header.
GET /cognitiveservices/voices/list HTTP/1.1 Host: westus.tts.speech.microsoft.com
-Authorization: Bearer [Base64 access_token]
+Ocp-Apim-Subscription-Key: YOUR_SUBSCRIPTION_KEY
``` ### Sample response
cognitive-services https://docs.microsoft.com/en-us/azure/cognitive-services/Speech-Service/sovereign-clouds https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/Speech-Service/sovereign-clouds.md
@@ -31,9 +31,10 @@ Available to US government entities and their partners only. See more informatio
- Custom speech (Acoustic Model (AM) and Language Model (LM) adaptation) - [Speech Studio](https://speech.azure.us/) - Text-to-speech
+ - Standard voice
+ - Neural voice
- Speech translator - **Unsupported features:**
- - Neural voice
- Custom Voice - **Supported languages:** - See the list of supported languages [here](language-support.md)
@@ -100,20 +101,13 @@ Available to organizations with a business presence in China. See more informati
- Custom speech (Acoustic Model (AM) and Language Model (LM) adaptation) - [Speech Studio](https://speech.azure.cn/) - Text-to-speech
+ - Standard voice
+ - Neural voice
- Speech translator - **Unsupported features:**
- - Neural voice
- Custom Voice - **Supported languages:**
- - Arabic (ar-*)
- - Chinese (zh-*)
- - English (en-*)
- - French (fr-*)
- - German (de-*)
- - Hindi (hi-IN)
- - Korean (ko-KR)
- - Russian (ru-RU)
- - Spanish (es-*)
+ - See the list of supported languages [here](language-support.md)
### Endpoint information
cognitive-services https://docs.microsoft.com/en-us/azure/cognitive-services/form-recognizer/overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/form-recognizer/overview.md
@@ -39,11 +39,11 @@ To try out the Form Recognizer Service, go to the online Sample UI Tool:
# [v2.0](#tab/v2-0) > [!div class="nextstepaction"]
-> [Try Prebuilt Models](https://fott.azurewebsites.net/)
+> [Try Form Recognizer](https://fott.azurewebsites.net/)
# [v2.1 preview](#tab/v2-1) > [!div class="nextstepaction"]
-> [Try Prebuilt Models](https://fott-preview.azurewebsites.net/)
+> [Try Form Recognizer](https://fott-preview.azurewebsites.net/)
---
@@ -149,7 +149,18 @@ Explore the [REST API reference documentation](https://westus2.dev.cognitive.mic
## Deploy on premises using Docker containers
-[Use Form Recognizer containers (preview)](form-recognizer-container-howto.md) to deploy API features on-premises. This Docker container enables you to bring the service closer to your data for compliance, security or other operational reasons.
+[Use Form Recognizer containers (preview)](form-recognizer-container-howto.md) to deploy API features on-premises. This Docker container enables you to bring the service closer to your data for compliance, security or other operational reasons.
+
+## Service availability and redundancy
+
+### Is the Form Recognizer service zone-resilient?
+
+Yes. The Form Recognizer service is zone-resilient by default.
+
+### How do I configure the Form Recognizer service to be zone-resilient?
+
+No customer configuration is necessary to enable zone-resiliency. Zone-resiliency for Form Recognizer resources is available by default and managed by the service itself.
+ ## Data privacy and security
@@ -157,4 +168,4 @@ As with all the cognitive services, developers using the Form Recognizer service
## Next steps
-Complete a [quickstart](quickstarts/client-library.md) to get started writing a forms processing app with Form Recognizer in the language of your choice.
\ No newline at end of file
+Complete a [quickstart](quickstarts/client-library.md) to get started writing a forms processing app with Form Recognizer in the language of your choice.
cognitive-services https://docs.microsoft.com/en-us/azure/cognitive-services/policy-reference https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/policy-reference.md
@@ -1,7 +1,7 @@
--- title: Built-in policy definitions for Azure Cognitive Services description: Lists Azure Policy built-in policy definitions for Azure Cognitive Services. These built-in policy definitions provide common approaches to managing your Azure resources.
-ms.date: 11/20/2020
+ms.date: 01/08/2021
author: nitinme ms.author: nitinme ms.service: cognitive-services
communication-services https://docs.microsoft.com/en-us/azure/communication-services/samples/calling-hero-sample https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/communication-services/samples/calling-hero-sample.md
@@ -69,7 +69,7 @@ When we want to deploy locally we need to start up both applications. When the s
You can test the sample locally by opening multiple browser sessions with the URL of your call to simulate a multi-user call.
-## Before running the sample for the first time
+### Before running the sample for the first time
1. Open an instance of PowerShell, Windows Terminal, Command Prompt or equivalent and navigate to the directory that you'd like to clone the sample to. 2. `git clone https://github.com/Azure-Samples/communication-services-web-calling-hero.git`
@@ -113,4 +113,4 @@ For more information, see the following articles:
- [Redux](https://redux.js.org/) - Client-side state management - [FluentUI](https://aka.ms/fluent-ui) - Microsoft powered UI library - [React](https://reactjs.org/) - Library for building user interfaces-- [ASP.NET Core](/aspnet/core/introduction-to-aspnet-core?preserve-view=true&view=aspnetcore-3.1) - Framework for building web applications\ No newline at end of file
+- [ASP.NET Core](/aspnet/core/introduction-to-aspnet-core?preserve-view=true&view=aspnetcore-3.1) - Framework for building web applications
container-instances https://docs.microsoft.com/en-us/azure/container-instances/container-instances-application-gateway https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/container-instances/container-instances-application-gateway.md
@@ -96,6 +96,9 @@ ACI_IP=$(az container show \
--query ipAddress.ip --output tsv) ```
+> [!IMPORTANT]
+> If the container group is stopped, started, or restarted, the container group's private IP is subject to change. If this happens, you will need to update the application gateway configuration.
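+A minimal sketch of that update, assuming the resource and pool names below are illustrative placeholders:
+
+```bash
+# Re-read the container group's current private IP after a restart.
+ACI_IP=$(az container show \
+  --resource-group myResourceGroup \
+  --name mycontainergroup \
+  --query ipAddress.ip --output tsv)
+
+# Point the application gateway's backend pool at the new address.
+az network application-gateway address-pool update \
+  --resource-group myResourceGroup \
+  --gateway-name myAppGateway \
+  --name appGatewayBackendPool \
+  --servers "$ACI_IP"
+```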
+ ## Create application gateway Create an application gateway in the virtual network, following the steps in the [application gateway quickstart](../application-gateway/quick-create-cli.md). The following [az network application-gateway create][az-network-application-gateway-create] command creates a gateway with a public frontend IP address and a route to the backend container group. See the [Application Gateway documentation](../application-gateway/index.yml) for details about the gateway settings.
container-instances https://docs.microsoft.com/en-us/azure/container-instances/container-instances-dedicated-hosts https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/container-instances/container-instances-dedicated-hosts.md
@@ -15,6 +15,9 @@ The dedicated sku is appropriate for container workloads that require workload i
## Prerequisites
+> [!NOTE]
+> Due to some current limitations, not all limit increase requests are guaranteed to be approved.
+ * The default limit for any subscription to use the dedicated sku is 0. If you would like to use this sku for your production container deployments, create an [Azure Support request][azure-support] to increase the limit. ## Use the dedicated sku
container-instances https://docs.microsoft.com/en-us/azure/container-instances/container-instances-virtual-network-concepts https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/container-instances/container-instances-virtual-network-concepts.md
@@ -28,6 +28,7 @@ Container groups deployed into an Azure virtual network enable scenarios like:
* **Azure Load Balancer** - Placing an Azure Load Balancer in front of container instances in a networked container group is not supported * **Global virtual network peering** - Global peering (connecting virtual networks across Azure regions) is not supported * **Public IP or DNS label** - Container groups deployed to a virtual network don't currently support exposing containers directly to the internet with a public IP address or a fully qualified domain name
+* **Virtual Network NAT** - Container groups deployed to a virtual network don't currently support using a NAT gateway resource for outbound internet connectivity.
## Other limitations
container-registry https://docs.microsoft.com/en-us/azure/container-registry/policy-reference https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/container-registry/policy-reference.md
@@ -1,7 +1,7 @@
--- title: Built-in policy definitions for Azure Container Registry description: Lists Azure Policy built-in policy definitions for Azure Container Registry. These built-in policy definitions provide common approaches to managing your Azure resources.
-ms.date: 11/20/2020
+ms.date: 01/08/2021
ms.topic: reference author: dlepow ms.author: danlep
container-registry https://docs.microsoft.com/en-us/azure/container-registry/security-controls-policy https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/container-registry/security-controls-policy.md
@@ -1,7 +1,7 @@
--- title: Azure Policy Regulatory Compliance controls for Azure Container Registry description: Lists Azure Policy Regulatory Compliance controls available for Azure Container Registry. These built-in policy definitions provide common approaches to managing the compliance of your Azure resources.
-ms.date: 11/20/2020
+ms.date: 01/08/2021
ms.topic: sample author: dlepow ms.author: danlep
cosmos-db https://docs.microsoft.com/en-us/azure/cosmos-db/cosmosdb-monitor-resource-logs https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cosmos-db/cosmosdb-monitor-resource-logs.md
@@ -5,7 +5,7 @@ author: SnehaGunda
services: cosmos-db ms.service: cosmos-db ms.topic: how-to
-ms.date: 10/28/2020
+ms.date: 01/06/2021
ms.author: sngun ---
@@ -46,6 +46,12 @@ Platform metrics and the Activity logs are collected automatically, whereas you
{ "time": "2020-03-30T23:55:10.9579593Z", "resourceId": "/SUBSCRIPTIONS/<your_subscription_ID>/RESOURCEGROUPS/<your_resource_group>/PROVIDERS/MICROSOFT.DOCUMENTDB/DATABASEACCOUNTS/<your_database_account>", "category": "CassandraRequests", "operationName": "QuerySelect", "properties": {"activityId": "6b33771c-baec-408a-b305-3127c17465b6","opCode": "<empty>","errorCode": "-1","duration": "0.311900","requestCharge": "1.589237","databaseName": "system","collectionName": "local","retryCount": "<empty>","authorizationTokenType": "PrimaryMasterKey","address": "104.42.195.92","piiCommandText": "{"request":"SELECT key from system.local"}","userAgent": """"}} ```
+* **GremlinRequests**: Select this option to log user-initiated requests from the front end to serve requests to Azure Cosmos DB's API for Gremlin. This log type is not available for other API accounts. The key properties to note are `operationName` and `requestCharge`. When you enable GremlinRequests in diagnostic logs, make sure to turn off DataPlaneRequests. You'll see one log for every request made on the API.
+
+ ```json
+ { "time": "2021-01-06T19:36:58.2554534Z", "resourceId": "/SUBSCRIPTIONS/<your_subscription_ID>/RESOURCEGROUPS/<your_resource_group>/PROVIDERS/MICROSOFT.DOCUMENTDB/DATABASEACCOUNTS/<your_database_account>", "category": "GremlinRequests", "operationName": "eval", "properties": {"activityId": "b16bd876-0e5c-4448-90d1-7f3134c6b5ff", "errorCode": "200", "duration": "9.6036", "requestCharge": "9.059999999999999", "databaseName": "GraphDemoDatabase", "collectionName": "GraphDemoContainer", "authorizationTokenType": "PrimaryMasterKey", "address": "98.225.2.189", "estimatedDelayFromRateLimitingInMilliseconds": "0", "retriedDueToRateLimiting": "False", "region": "Australia East", "requestLength": "266", "responseLength": "364", "userAgent": "<empty>"}}
+ ```
+ * **QueryRuntimeStatistics**: Select this option to log the query text that was executed. This log type is available for SQL API accounts only. ```json
cosmos-db https://docs.microsoft.com/en-us/azure/cosmos-db/local-emulator-release-notes https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cosmos-db/local-emulator-release-notes.md
@@ -22,6 +22,10 @@ This article shows the Azure Cosmos DB Emulator release notes with a list of fea
## Release notes
+### 2.11.10 (5 January 2021)
+
+ - This release updates the local Data Explorer content to the latest Azure portal version and adds a new public option, "/ExportPemCert", which allows the emulator user to directly export the emulator's public certificate as a .PEM file.
### 2.11.9 (3 December 2020) - This release addresses a couple of issues with the Azure Cosmos DB Emulator functionality, in addition to the general content update reflecting the latest features and improvements in Azure Cosmos DB:
cosmos-db https://docs.microsoft.com/en-us/azure/cosmos-db/mongodb-indexing https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cosmos-db/mongodb-indexing.md
@@ -5,7 +5,7 @@ ms.service: cosmos-db
ms.subservice: cosmosdb-mongo ms.devlang: nodejs ms.topic: how-to
-ms.date: 11/06/2020
+ms.date: 01/08/2021
author: timsander1 ms.author: tisande ms.custom: devx-track-js
@@ -23,6 +23,16 @@ To index additional fields, you apply the MongoDB index-management commands. As
To apply a sort to a query, you must create an index on the fields used in the sort operation.
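+For example, a sort on a `createdAt` field needs a matching single field index. A hypothetical sketch using `mongosh` (the collection, field, and connection-string variable are illustrative):
+
+```bash
+# Illustrative only: assumes a valid connection string for your account.
+mongosh "$COSMOS_MONGO_CONNECTION_STRING" --eval '
+  db.coll.createIndex({ createdAt: -1 });             // index backing the sort
+  db.coll.find({}).sort({ createdAt: -1 }).limit(5);  // sorted query now succeeds
+'
+```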
+### Editing indexing policy
+
+We recommend editing your indexing policy in the Data Explorer within the Azure portal.
+You can add single field and wildcard indexes from the indexing policy editor in the Data Explorer:
+
+:::image type="content" source="./media/mongodb-indexing/indexing-policy-editor.png" alt-text="Indexing policy editor":::
+
+> [!NOTE]
+> You can't create compound indexes using the indexing policy editor in the Data Explorer.
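+If you need a compound index, you can create it with the index-management commands instead. A minimal sketch (field names are illustrative):
+
+```bash
+# Compound indexes must be created via command, not the portal editor.
+mongosh "$COSMOS_MONGO_CONNECTION_STRING" --eval '
+  db.coll.createIndex({ name: 1, age: 1 })
+'
+```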
+ ## Index types ### Single field
@@ -31,6 +41,10 @@ You can create indexes on any single field. The sort order of the single field i
`db.coll.createIndex({name:1})`
+You can create the same single field index on `name` in the Azure portal:
+
+:::image type="content" source="./media/mongodb-indexing/add-index.png" alt-text="Add name index in indexing policy editor":::
+ One query uses multiple single field indexes where available. You can create up to 500 single field indexes per container. ### Compound indexes (MongoDB server version 3.6)
@@ -129,6 +143,10 @@ Here's how you can create a wildcard index on all fields:
`db.coll.createIndex( { "$**" : 1 } )`
+You can also create wildcard indexes using the Data Explorer in the Azure portal:
+
+:::image type="content" source="./media/mongodb-indexing/add-wildcard-index.png" alt-text="Add wildcard index in indexing policy editor":::
+ > [!NOTE] > If you are just starting development, we **strongly** recommend starting off with a wildcard index on all fields. This can simplify development and make it easier to optimize queries.
cosmos-db https://docs.microsoft.com/en-us/azure/cosmos-db/monitor-cosmos-db-reference https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cosmos-db/monitor-cosmos-db-reference.md
@@ -76,7 +76,7 @@ The following table lists the properties of resource logs in Azure Cosmos DB. Th
| --- | --- | --- | | **time** | **TimeGenerated** | The date and time (UTC) when the operation occurred. | | **resourceId** | **Resource** | The Azure Cosmos DB account for which logs are enabled.|
-| **category** | **Category** | For Azure Cosmos DB, **DataPlaneRequests**, **MongoRequests**, **QueryRuntimeStatistics**, **PartitionKeyStatistics**, **PartitionKeyRUConsumption**, **ControlPlaneRequests** are the available log types. |
+| **category** | **Category** | For Azure Cosmos DB, **DataPlaneRequests**, **MongoRequests**, **QueryRuntimeStatistics**, **PartitionKeyStatistics**, **PartitionKeyRUConsumption**, **ControlPlaneRequests**, **CassandraRequests**, and **GremlinRequests** are the available log types. |
| **operationName** | **OperationName** | Name of the operation. The operation name can be `Create`, `Update`, `Read`, `ReadFeed`, `Delete`, `Replace`, `Execute`, `SqlQuery`, `Query`, `JSQuery`, `Head`, `HeadFeed`, or `Upsert`. | | **properties** | n/a | The contents of this field are described in the rows that follow. | | **activityId** | **activityId_g** | The unique GUID for the logged operation. |
cosmos-db https://docs.microsoft.com/en-us/azure/cosmos-db/policy-reference https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cosmos-db/policy-reference.md
@@ -1,7 +1,7 @@
--- title: Built-in policy definitions for Azure Cosmos DB description: Lists Azure Policy built-in policy definitions for Azure Cosmos DB. These built-in policy definitions provide common approaches to managing your Azure resources.
-ms.date: 11/20/2020
+ms.date: 01/08/2021
ms.topic: reference author: SnehaGunda ms.author: sngun
cosmos-db https://docs.microsoft.com/en-us/azure/cosmos-db/postgres-migrate-cosmos-db-kafka https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cosmos-db/postgres-migrate-cosmos-db-kafka.md new file mode 100644
@@ -0,0 +1,263 @@
+---
+title: Migrate data from PostgreSQL to Azure Cosmos DB Cassandra API account using Apache Kafka
+description: Learn how to use Kafka Connect to synchronize data from PostgreSQL to Azure Cosmos DB Cassandra API in real time.
+author: abhirockzz
+ms.service: cosmos-db
+ms.subservice: cosmosdb-cassandra
+ms.topic: how-to
+ms.date: 01/05/2021
+ms.author: abhishgu
+ms.reviewer: abhishgu
+---
+
+# Migrate data from PostgreSQL to Azure Cosmos DB Cassandra API account using Apache Kafka
+[!INCLUDE[appliesto-cassandra-api](includes/appliesto-cassandra-api.md)]
+
+Cassandra API in Azure Cosmos DB has become a great choice for enterprise workloads running on Apache Cassandra for various reasons such as:
+
+* **Significant cost savings:** You can save costs with Azure Cosmos DB, which includes the cost of VMs, bandwidth, and any applicable licenses. Additionally, you don't have to manage data centers, servers, SSD storage, networking, or electricity costs.
+
+* **Better scalability and availability:** It eliminates single points of failure and provides better scalability and availability for your applications.
+
+* **No overhead of managing and monitoring:** As a fully managed cloud service, Azure Cosmos DB removes the overhead of managing and monitoring a myriad of settings.
+
+[Kafka Connect](https://kafka.apache.org/documentation/#connect) is a platform to stream data between [Apache Kafka](https://kafka.apache.org/) and other systems in a scalable and reliable manner. It supports several off-the-shelf connectors, which means that you don't need custom code to integrate external systems with Apache Kafka.
+
+This article demonstrates how to use a combination of Kafka connectors to set up a data pipeline that continuously synchronizes records from a relational database such as [PostgreSQL](https://www.postgresql.org/) to [Azure Cosmos DB Cassandra API](cassandra-introduction.md).
+
+## Overview
+
+Here is a high-level overview of the end-to-end flow presented in this article.
+
+Data in the PostgreSQL table will be pushed to Apache Kafka using the [Debezium PostgreSQL connector](https://debezium.io/documentation/reference/1.2/connectors/postgresql.html), which is a Kafka Connect **source** connector. Inserts, updates, and deletes of records in the PostgreSQL table will be captured as `change data` events and sent to Kafka topic(s). The [DataStax Apache Kafka connector](https://docs.datastax.com/en/kafka/doc/kafka/kafkaIntro.html) (a Kafka Connect **sink** connector) forms the second part of the pipeline. It synchronizes the change data events from the Kafka topic to Azure Cosmos DB Cassandra API tables.
+
+> [!NOTE]
+> Using specific features of the DataStax Apache Kafka connector allows us to push data to multiple tables. In this example, the connector will help us persist change data records to two Cassandra tables that can support different query requirements.
+
+## Prerequisites
+
+* [Provision an Azure Cosmos DB Cassandra API account](create-cassandra-dotnet.md#create-a-database-account)
+* [Use cqlsh or hosted shell for validation](cassandra-support.md#hosted-cql-shell-preview)
+* JDK 8 or above
+* [Docker](https://www.docker.com/) (optional)
+
+## Base setup
+
+### Set up a PostgreSQL database if you haven't already
+
+This could be an existing on-premises database, or you could [download and install one](https://www.postgresql.org/download/) on your local machine. It's also possible to use a [Docker container](https://hub.docker.com/_/postgres).
+
+To start a container:
+
+```bash
+docker run --rm -p 5432:5432 -e POSTGRES_PASSWORD=<enter password> postgres
+```
+
+Connect to your PostgreSQL instance using the [`psql`](https://www.postgresql.org/docs/current/app-psql.html) client:
+
+```bash
+psql -h localhost -p 5432 -U postgres -W -d postgres
+```
+
+Create a table to store sample order information:
+
+```sql
+CREATE SCHEMA retail;
+
+CREATE TABLE retail.orders_info (
+ orderid SERIAL NOT NULL PRIMARY KEY,
+ custid INTEGER NOT NULL,
+ amount INTEGER NOT NULL,
+ city VARCHAR(255) NOT NULL,
+ purchase_time VARCHAR(40) NOT NULL
+);
+```
+
+### Use the Azure portal to create the Cassandra keyspace and tables required for the demo application
+
+> [!NOTE]
+> Use the same keyspace and table names as shown below.
+
+```sql
+CREATE KEYSPACE retail WITH REPLICATION = {'class' : 'NetworkTopologyStrategy', 'datacenter1' : 1};
+
+CREATE TABLE retail.orders_by_customer (order_id int, customer_id int, purchase_amount int, city text, purchase_time timestamp, PRIMARY KEY (customer_id, purchase_time)) WITH CLUSTERING ORDER BY (purchase_time DESC) AND cosmosdb_cell_level_timestamp=true AND cosmosdb_cell_level_timestamp_tombstones=true AND cosmosdb_cell_level_timetolive=true;
+
+CREATE TABLE retail.orders_by_city (order_id int, customer_id int, purchase_amount int, city text, purchase_time timestamp, PRIMARY KEY (city,order_id)) WITH cosmosdb_cell_level_timestamp=true AND cosmosdb_cell_level_timestamp_tombstones=true AND cosmosdb_cell_level_timetolive=true;
+```
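+Optionally, confirm the keyspace and tables from `cqlsh`; a sketch assuming your account's contact point and credentials (the placeholders are illustrative):
+
+```bash
+# Cosmos DB Cassandra API listens on port 10350 and requires SSL.
+cqlsh <account-name>.cassandra.cosmos.azure.com 10350 \
+  -u <account-name> -p <account-password> --ssl \
+  -e "DESCRIBE KEYSPACE retail;"
+```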
+
+### Set up Apache Kafka
+
+This article uses a local cluster, but you can choose any other option. [Download Kafka](https://kafka.apache.org/downloads), unzip it, and start the ZooKeeper and Kafka servers:
+
+```bash
+cd <KAFKA_HOME>
+
+#start zookeeper
+bin/zookeeper-server-start.sh config/zookeeper.properties
+
+#start kafka (in another terminal)
+bin/kafka-server-start.sh config/server.properties
+```
+
+### Set up connectors
+
+Install the Debezium PostgreSQL and DataStax Apache Kafka connectors. Download the Debezium PostgreSQL connector plug-in archive. For example, to download version 1.3.0 of the connector (latest at the time of writing), use [this link](https://repo1.maven.org/maven2/io/debezium/debezium-connector-postgres/1.3.0.Final/debezium-connector-postgres-1.3.0.Final-plugin.tar.gz). Download the DataStax Apache Kafka connector from [this link](https://downloads.datastax.com/#akc).
+
+Unzip both connector archives and copy the JAR files to the [Kafka Connect plugin.path](https://kafka.apache.org/documentation/#connectconfigs); this example copies them to `<KAFKA_HOME>/libs`:
+
+```bash
+cp <path_to_debezium_connector>/*.jar <KAFKA_HOME>/libs
+cp <path_to_cassandra_connector>/*.jar <KAFKA_HOME>/libs
+```
+
+> For details, please refer to the [Debezium](https://debezium.io/documentation/reference/1.2/connectors/postgresql.html#postgresql-deploying-a-connector) and [DataStax](https://docs.datastax.com/en/kafka/doc/) documentation.
+
+## Configure Kafka Connect and start data pipeline
+
+### Start Kafka Connect cluster
+
+```bash
+cd <KAFKA_HOME>/bin
+./connect-distributed.sh ../config/connect-distributed.properties
+```
+
+### Start PostgreSQL connector instance
+
+Save the connector configuration (JSON) to a file, for example `pg-source-config.json`:
+
+```json
+{
+ "name": "pg-orders-source",
+ "config": {
+ "connector.class": "io.debezium.connector.postgresql.PostgresConnector",
+ "database.hostname": "localhost",
+ "database.port": "5432",
+ "database.user": "postgres",
+ "database.password": "password",
+ "database.dbname": "postgres",
+ "database.server.name": "myserver",
+ "plugin.name": "wal2json",
+ "table.include.list": "retail.orders_info",
+ "value.converter": "org.apache.kafka.connect.json.JsonConverter"
+ }
+}
+```
+
+To start the PostgreSQL connector instance:
+
+```bash
+curl -X POST -H "Content-Type: application/json" --data @pg-source-config.json http://localhost:8083/connectors
+```
+
+> [!NOTE]
+> To delete, you can use: `curl -X DELETE http://localhost:8083/connectors/pg-orders-source`
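+To verify that the connector instance is running, you can query the standard Kafka Connect REST API status endpoint:
+
+```bash
+# The connector and its task should both report state RUNNING.
+curl -s http://localhost:8083/connectors/pg-orders-source/status
+```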
+
+### Insert data
+
+The `orders_info` table contains order details such as order ID, customer ID, and city. Populate the table with random data using the SQL below.
+
+```sql
+insert into retail.orders_info (
+ custid, amount, city, purchase_time
+)
+select
+ random() * 10000 + 1,
+ random() * 200,
+ ('{New Delhi,Seattle,New York,Austin,Chicago,Cleveland}'::text[])[ceil(random()*6)],
+ NOW() + (random() * (interval '1 min'))
+from generate_series(1, 10) s(i);
+```
+
+This should insert 10 records into the table. Adjust the number of records in `generate_series(1, 10)` as needed; for example, to insert `100` records, use `generate_series(1, 100)`.
+
+To confirm:
+
+```sql
+select * from retail.orders_info;
+```
+
+Check the change data capture events in the Kafka topic.
+
+> [!NOTE]
+> Note that the topic name is `myserver.retail.orders_info`, which follows the [connector convention](https://debezium.io/documentation/reference/1.3/connectors/postgresql.html#postgresql-topic-names)
+
+```bash
+cd <KAFKA_HOME>/bin
+
+./kafka-console-consumer.sh --topic myserver.retail.orders_info --bootstrap-server localhost:9092 --from-beginning
+```
+
+You should see the change data events in JSON format.
+
+### Start DataStax Apache Kafka connector instance
+
+Save the connector configuration (JSON) to a file, for example `cassandra-sink-config.json`, and update the properties for your environment.
+
+```json
+{
+ "name": "kafka-cosmosdb-sink",
+ "config": {
+ "connector.class": "com.datastax.oss.kafka.sink.CassandraSinkConnector",
+ "tasks.max": "1",
+ "topics": "myserver.retail.orders_info",
+ "contactPoints": "<Azure Cosmos DB account name>.cassandra.cosmos.azure.com",
+ "loadBalancing.localDc": "<Azure Cosmos DB region e.g. Southeast Asia>",
+ "datastax-java-driver.advanced.connection.init-query-timeout": 5000,
+ "ssl.hostnameValidation": true,
+ "ssl.provider": "JDK",
+ "ssl.keystore.path": "<path to JDK keystore path e.g. <JAVA_HOME>/jre/lib/security/cacerts>",
+ "ssl.keystore.password": "<keystore password: it is 'changeit' by default>",
+ "port": 10350,
+ "maxConcurrentRequests": 500,
+ "maxNumberOfRecordsInBatch": 32,
+ "queryExecutionTimeout": 30,
+ "connectionPoolLocalSize": 4,
+ "auth.username": "<Azure Cosmos DB user name (same as account name)>",
+ "auth.password": "<Azure Cosmos DB password>",
+ "topic.myserver.retail.orders_info.retail.orders_by_customer.mapping": "order_id=value.orderid, customer_id=value.custid, purchase_amount=value.amount, city=value.city, purchase_time=value.purchase_time",
+ "topic.myserver.retail.orders_info.retail.orders_by_city.mapping": "order_id=value.orderid, customer_id=value.custid, purchase_amount=value.amount, city=value.city, purchase_time=value.purchase_time",
+ "key.converter": "org.apache.kafka.connect.storage.StringConverter",
+ "transforms": "unwrap",
+ "transforms.unwrap.type": "io.debezium.transforms.ExtractNewRecordState",
+ "offset.flush.interval.ms": 10000
+ }
+}
+```
+
+To start the connector instance:
+
+```bash
+curl -X POST -H "Content-Type: application/json" --data @cassandra-sink-config.json http://localhost:8083/connectors
+```
+
+The connector should spring into action, and the end-to-end pipeline from PostgreSQL to Azure Cosmos DB will be operational.
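+As with the source connector, you can check the sink connector's state through the Kafka Connect REST API:
+
+```bash
+# The connector and its task should both report state RUNNING.
+curl -s http://localhost:8083/connectors/kafka-cosmosdb-sink/status
+```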
+
+### Query Azure Cosmos DB
+
+Check the Cassandra tables in Azure Cosmos DB. Here are some of the queries you can try:
+
+```sql
+select count(*) from retail.orders_by_customer;
+select count(*) from retail.orders_by_city;
+
+select * from retail.orders_by_customer;
+select * from retail.orders_by_city;
+
+select * from retail.orders_by_city where city='Seattle';
+select * from retail.orders_by_customer where customer_id = 10;
+```
+
+You can continue to insert more data into PostgreSQL and confirm that the records are synchronized to Azure Cosmos DB.
+
+## Next steps
+
+* [Integrate Apache Kafka and Azure Cosmos DB Cassandra API using Kafka Connect](cassandra-kafka-connect.md)
+* [Integrate Apache Kafka Connect on Azure Event Hubs (Preview) with Debezium for Change Data Capture](../event-hubs/event-hubs-kafka-connect-debezium.md)
+* [Migrate data from Oracle to Azure Cosmos DB Cassandra API using Blitzz](oracle-migrate-cosmos-db-blitzz.md)
+* [Provision throughput on containers and databases](set-throughput.md)
+* [Partition key best practices](partitioning-overview.md#choose-partitionkey)
+* [Estimate RU/s using the Azure Cosmos DB capacity planner](estimate-ru-with-capacity-planner.md)
+
cosmos-db https://docs.microsoft.com/en-us/azure/cosmos-db/security-controls-policy https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cosmos-db/security-controls-policy.md
@@ -1,7 +1,7 @@
--- title: Azure Policy Regulatory Compliance controls for Azure Cosmos DB description: Lists Azure Policy Regulatory Compliance controls available for Azure Cosmos DB. These built-in policy definitions provide common approaches to managing the compliance of your Azure resources.
-ms.date: 11/20/2020
+ms.date: 01/08/2021
ms.topic: sample author: SnehaGunda ms.author: sngun
cosmos-db https://docs.microsoft.com/en-us/azure/cosmos-db/serverless https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cosmos-db/serverless.md
@@ -5,15 +5,12 @@ author: ThomasWeiss
ms.author: thweiss ms.service: cosmos-db ms.topic: conceptual
-ms.date: 12/23/2020
+ms.date: 01/08/2021
--- # Azure Cosmos DB serverless (Preview) [!INCLUDE[appliesto-all-apis](includes/appliesto-all-apis.md)]
-> [!IMPORTANT]
-> Azure Cosmos DB serverless is currently in preview. This preview version is provided without a Service Level Agreement and is not recommended for production workloads. For more information, see [Supplemental terms of use for Microsoft Azure previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/).
- Azure Cosmos DB serverless lets you use your Azure Cosmos account in a consumption-based fashion where you are only charged for the Request Units consumed by your database operations and the storage consumed by your data. Serverless containers can serve thousands of requests per second with no minimum charge and no capacity planning required. > [!IMPORTANT]
@@ -26,16 +23,13 @@ When using Azure Cosmos DB, every database operation has a cost expressed in [Re
## Use-cases
-Azure Cosmos DB serverless best fits scenarios where you expect:
--- **Low, intermittent and unpredictable traffic**: Because provisioning capacity in such situations isn't required and may be cost-prohibitive-- **Moderate performance**: Because serverless containers have [specific performance characteristics](#performance)-
-For these reasons, Azure Cosmos DB serverless should be considered in the following situations:
+Azure Cosmos DB serverless best fits scenarios where you expect **intermittent and unpredictable traffic** with long idle times. Because provisioning capacity for such workloads is unnecessary and may be cost-prohibitive, Azure Cosmos DB serverless should be considered in the following use-cases:
- Getting started with Azure Cosmos DB-- Development, testing and prototyping of new applications-- Running small-to-medium applications with intermittent traffic that is hard to forecast
+- Running applications with
+ - bursty, intermittent traffic that is hard to forecast, or
+ - low (<10%) average-to-peak traffic ratio
+- Developing, testing, prototyping, and running new applications in production where the traffic pattern is unknown
- Integrating with serverless compute services like [Azure Functions](../azure-functions/functions-overview.md) See the [how to choose between provisioned throughput and serverless](throughput-serverless.md) article for more guidance on how to choose the offer that best fits your use-case.
@@ -69,14 +63,7 @@ You can find the same chart when using Azure Monitor, as described [here](monito
## <a id="performance"></a>Performance
-Serverless resources yield specific performance characteristics that are different from what provisioned throughput resources deliver:
--- **Availability**: After the serverless offer becomes generally available, the availability of serverless containers will be covered by a Service Level Agreement (SLA) of 99.9% when Availability Zones (zone redundancy) aren't used. The SLA is 99.99% when Availability Zones are used.-- **Latency**: After the serverless offer becomes generally available, the latency of serverless containers will be covered by a Service Level Objective (SLO) of 10 milliseconds or less for point-reads and 30 milliseconds or less for writes. A point-read operation consists in fetching a single item by its ID and partition key value.-- **Burstability**: After the serverless offer becomes generally available, the burstability of serverless containers will be covered by a Service Level Objective (SLO) of 95%. This means the maximum burstability can be attained at least 95% of the time.-
-> [!NOTE]
-> As any Azure preview, Azure Cosmos DB serverless is excluded from Service Level Agreements (SLA). The performance characteristics mentioned above are provided as a preview of what this offer will deliver when generally available.
+Serverless resources yield specific performance characteristics that are different from what provisioned throughput resources deliver. After the serverless offer becomes generally available, the latency of serverless containers will be covered by a Service Level Objective (SLO) of 10 milliseconds or less for point-reads and 30 milliseconds or less for writes. A point-read operation consists of fetching a single item by its ID and partition key value.
## Next steps
cosmos-db https://docs.microsoft.com/en-us/azure/cosmos-db/table-introduction https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cosmos-db/table-introduction.md
@@ -5,7 +5,7 @@ author: SnehaGunda
ms.service: cosmos-db ms.subservice: cosmosdb-table ms.topic: overview
-ms.date: 11/25/2020
+ms.date: 01/08/2021
ms.author: sngun ---
@@ -15,7 +15,7 @@ ms.author: sngun
[Azure Cosmos DB](introduction.md) provides the Table API for applications that are written for Azure Table storage and that need premium capabilities like: * [Turnkey global distribution](distribute-data-globally.md).
-* [Dedicated throughput](partitioning-overview.md) worldwide.
+* [Dedicated throughput](partitioning-overview.md) worldwide (when using provisioned throughput).
* Single-digit millisecond latencies at the 99th percentile. * Guaranteed high availability. * Automatic secondary indexing.
@@ -39,7 +39,7 @@ If you currently use Azure Table Storage, you gain the following benefits by mov
| Indexing | Only primary index on PartitionKey and RowKey. No secondary indexes. | Automatic and complete indexing on all properties by default, with no index management. | | Query | Query execution uses index for primary key, and scans otherwise. | Queries can take advantage of automatic indexing on properties for fast query times. | | Consistency | Strong within primary region. Eventual within secondary region. | [Five well-defined consistency levels](consistency-levels.md) to trade off availability, latency, throughput, and consistency based on your application needs. |
-| Pricing | Storage-optimized. | Throughput-optimized. |
+| Pricing | Consumption-based. | Available in both [consumption-based](serverless.md) and [provisioned capacity](set-throughput.md) modes. |
| SLAs | 99.9% to 99.99% availability, depending on the replication strategy. | 99.999% read availability, 99.99% write availability on a single-region account and 99.999% write availability on multi-region accounts. [Comprehensive SLAs](https://azure.microsoft.com/support/legal/sla/cosmos-db/) covering availability, latency, throughput and consistency. | ## Get started
cosmos-db https://docs.microsoft.com/en-us/azure/cosmos-db/table-support https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cosmos-db/table-support.md
@@ -4,7 +4,7 @@ description: Learn how Azure Cosmos DB Table API and Azure Storage Tables work t
ms.service: cosmos-db ms.subservice: cosmosdb-table ms.topic: how-to
-ms.date: 05/21/2020
+ms.date: 01/08/2021
author: sakash279 ms.author: akshanka ms.reviewer: sngun
@@ -15,6 +15,9 @@ ms.reviewer: sngun
Azure Cosmos DB Table API and Azure Table storage share the same table data model and expose the same create, delete, update, and query operations through their SDKs.
+> [!NOTE]
+> The [serverless capacity mode](serverless.md) is now available on Azure Cosmos DB's Table API.
+ [!INCLUDE [storage-table-cosmos-comparison](../../includes/storage-table-cosmos-comparison.md)] ## Developing with the Azure Cosmos DB Table API
cosmos-db https://docs.microsoft.com/en-us/azure/cosmos-db/throughput-serverless https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cosmos-db/throughput-serverless.md
@@ -5,7 +5,7 @@ author: ThomasWeiss
ms.author: thweiss ms.service: cosmos-db ms.topic: conceptual
-ms.date: 12/23/2020
+ms.date: 01/08/2021
--- # How to choose between provisioned throughput and serverless
@@ -20,11 +20,11 @@ Azure Cosmos DB is available in two different capacity modes: [provisioned throu
| Criteria | Provisioned throughput | Serverless | | --- | --- | --- | | Status | Generally available | In preview |
-| Best suited for | Mission-critical workloads requiring predictable performance | Small-to-medium workloads with light and intermittent traffic that is hard to forecast |
+| Best suited for | Workloads with sustained traffic requiring predictable performance | Workloads with intermittent or unpredictable traffic and low average-to-peak traffic ratio |
| How it works | For each of your containers, you provision some amount of throughput expressed in [Request Units](request-units.md) per second. Every second, this amount of Request Units is available for your database operations. Provisioned throughput can be updated manually or adjusted automatically with [autoscale](provision-throughput-autoscale.md). | You run your database operations against your containers without having to provision any capacity. | | Geo-distribution | Available (unlimited number of Azure regions) | Unavailable (serverless accounts can only run in 1 Azure region) | | Maximum storage per container | Unlimited | 50 GB |
-| Performance | 99.99% to 99.999% availability covered by SLA<br>< 10 ms latency for point-reads and writes covered by SLA<br>99.99% guaranteed throughput covered by SLA | 99.9% to 99.99% availability covered by SLA<br>< 10 ms latency for point-reads and < 30 ms for writes covered by SLO<br>95% burstability covered by SLO |
+| Performance | < 10 ms latency for point-reads and writes covered by SLA | < 10 ms latency for point-reads and < 30 ms for writes covered by SLO |
| Billing model | Billing is done on a per-hour basis for the RU/s provisioned, regardless of how many RUs were consumed. | Billing is done on a per-hour basis for the amount of RUs consumed by your database operations. | > [!IMPORTANT]
cost-management-billing https://docs.microsoft.com/en-us/azure/cost-management-billing/costs/manage-automation https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cost-management-billing/costs/manage-automation.md
@@ -3,7 +3,7 @@ title: Manage Azure costs with automation
description: This article explains how you can manage Azure costs with automation. author: bandersmsft ms.author: banders
-ms.date: 11/19/2020
+ms.date: 01/06/2021
ms.topic: conceptual ms.service: cost-management-billing ms.subservice: cost-management
@@ -52,6 +52,22 @@ We recommend that you make _no more than one request_ to the Usage Details API p
Use the API to get all the data you need at the highest-level scope available. Wait until all needed data is ingested before doing any filtering, grouping, or aggregated analysis. The API is optimized specifically to provide large amounts of unaggregated raw cost data. To learn more about scopes available in Cost Management, see [Understand and work with scopes](./understand-work-scopes.md). Once you've downloaded the needed data for a scope, use Excel to analyze data further with filters and pivot tables.
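+As an illustrative sketch of such a call (the scope, billing account ID, and API version are placeholders; use the values appropriate for your account type):
+
+```bash
+# One unaggregated pull per day at the highest scope you can access.
+SCOPE="providers/Microsoft.Billing/billingAccounts/<billing-account-id>"
+curl -s -H "Authorization: Bearer $AZURE_ACCESS_TOKEN" \
+  "https://management.azure.com/$SCOPE/providers/Microsoft.Consumption/usageDetails?api-version=2019-10-01"
+```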
+### Notes about pricing
+
+If you want to reconcile usage and charges with your price sheet or invoice, note the following information.
+
+**Price Sheet price behavior** - The prices shown on the price sheet are the prices that you receive from Azure. They're scaled to a specific unit of measure. Unfortunately, the unit of measure doesn't always align with the unit of measure at which the actual resource usage and charges are emitted.
+
+**Usage Details price behavior** - Usage files show scaled information that may not match precisely with the price sheet. Specifically:
+
+- Unit Price - The price is scaled to match the unit of measure at which the charges are actually emitted by Azure resources. If scaling occurs, then the price won't match the price seen in the Price Sheet.
+- Unit of Measure - Represents the unit of measure at which charges are actually emitted by Azure resources.
+- Effective Price / Resource Rate - The price represents the actual rate that you end up paying per unit, after discounts are taken into account. It's the price that should be used with the Quantity to do Price * Quantity calculations to reconcile charges. The price takes into account the following scenarios, as well as the scaled unit price that's also present in the files; as a result, it might differ from the scaled unit price (see the worked example after this list).
+ - Tiered pricing - For example: $10 for the first 100 units, $8 for the next 100 units.
+ - Included quantity - For example: The first 100 units are free and then $10 per unit.
+ - Reservations
+  - Rounding that occurs during calculation - Rounding takes into account the consumed quantity, tiered/included quantity pricing, and the scaled unit price.
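+A hypothetical worked example of how tiered pricing produces an effective price that differs from any single listed price:
+
+```bash
+# Illustrative tiers: $10 for the first 100 units, $8 for the next 100.
+# For 150 consumed units, the effective price blends the two tiers.
+quantity=150
+cost=$((100 * 10 + 50 * 8))   # = 1400
+awk "BEGIN { printf \"effective unit price: %.2f\n\", $cost / $quantity }"   # ~ 9.33
+```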
+ ## Example Usage Details API requests The following example requests are used by Microsoft customers to address common scenarios that you might come across.
@@ -320,7 +336,7 @@ You can configure budgets to start automated actions using Azure Action Groups.
## Data latency and rate limits
-We recommend that you call the APIs no more than once per day. Cost Management data is refreshed every four hours as new usage data is received from Azure resource providers. Calling more frequently won't provide any additional data. Instead, it will create increased load. To learn more about how often data changes and how data latency is handled, see [Understand cost management data](understand-cost-mgt-data.md).
+We recommend that you call the APIs no more than once per day. Cost Management data is refreshed every four hours as new usage data is received from Azure resource providers. Calling more frequently doesn't provide more data. Instead, it creates increased load. To learn more about how often data changes and how data latency is handled, see [Understand cost management data](understand-cost-mgt-data.md).
### Error code 429 - Call count has exceeded rate limits
data-lake-analytics https://docs.microsoft.com/en-us/azure/data-lake-analytics/policy-reference https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-lake-analytics/policy-reference.md
@@ -1,7 +1,7 @@
--- title: Built-in policy definitions for Azure Data Lake Analytics description: Lists Azure Policy built-in policy definitions for Azure Data Lake Analytics. These built-in policy definitions provide common approaches to managing your Azure resources.
-ms.date: 11/20/2020
+ms.date: 01/08/2021
ms.topic: reference author: hrasheed-msft ms.author: hrasheed
data-lake-analytics https://docs.microsoft.com/en-us/azure/data-lake-analytics/security-controls-policy https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-lake-analytics/security-controls-policy.md
@@ -1,7 +1,7 @@
--- title: Azure Policy Regulatory Compliance controls for Azure Data Lake Analytics description: Lists Azure Policy Regulatory Compliance controls available for Azure Data Lake Analytics. These built-in policy definitions provide common approaches to managing the compliance of your Azure resources.
-ms.date: 11/20/2020
+ms.date: 01/08/2021
ms.topic: sample author: hrasheed-msft ms.author: hrasheed
data-lake-store https://docs.microsoft.com/en-us/azure/data-lake-store/policy-reference https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-lake-store/policy-reference.md
@@ -1,7 +1,7 @@
--- title: Built-in policy definitions for Azure Data Lake Storage Gen1 description: Lists Azure Policy built-in policy definitions for Azure Data Lake Storage Gen1. These built-in policy definitions provide common approaches to managing your Azure resources.
-ms.date: 11/20/2020
+ms.date: 01/08/2021
ms.topic: reference author: twooley ms.author: twooley
data-lake-store https://docs.microsoft.com/en-us/azure/data-lake-store/security-controls-policy https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-lake-store/security-controls-policy.md
@@ -1,7 +1,7 @@
--- title: Azure Policy Regulatory Compliance controls for Azure Data Lake Storage Gen1 description: Lists Azure Policy Regulatory Compliance controls available for Azure Data Lake Storage Gen1. These built-in policy definitions provide common approaches to managing the compliance of your Azure resources.
-ms.date: 11/20/2020
+ms.date: 01/08/2021
ms.topic: sample author: normesta ms.author: normesta
databox-online https://docs.microsoft.com/en-us/azure/databox-online/azure-stack-edge-contact-microsoft-support https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/databox-online/azure-stack-edge-contact-microsoft-support.md
@@ -6,7 +6,7 @@ author: alkohli
ms.service: databox ms.subservice: edge ms.topic: how-to
-ms.date: 12/17/2020
+ms.date: 01/07/2021
ms.author: alkohli ---
@@ -73,7 +73,7 @@ This information only applies to Azure Stack device. The process to report hardw
* A Field Replacement Unit (FRU) for the failed hardware part is sent. Currently, power supply units and solid-state drives are the only supported FRUs. * Only FRUs are replaced within the next business day, everything else requires a full system replacement (FSR) to be shipped.
-3. If a Support ticket is raised before 4:30 pm local time (Monday to Friday), an onsite technician is dispatched the next business day to your location to perform a FRU replacement. A full system replacement typically will take much longer because the parts are shipped from our factory and could be subject to transportation and customs delays.
+3. If it's determined by 1 PM local time (Monday to Friday) that a FRU replacement is needed, an onsite technician is dispatched to your location the next business day to perform the replacement. A full system replacement typically takes much longer because the parts are shipped from our factory and could be subject to transportation and customs delays.
## Manage a support request
defender-for-iot https://docs.microsoft.com/en-us/azure/defender-for-iot/how-to-deploy-edge https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/defender-for-iot/how-to-deploy-edge.md
@@ -1,6 +1,6 @@
--- title: Deploy IoT Edge security module
-description: Learn about how to deploy an Defender for IoT security agent on IoT Edge.
+description: Learn about how to deploy a Defender for IoT security agent on IoT Edge.
services: defender-for-iot ms.service: defender-for-iot documentationcenter: na
@@ -26,7 +26,7 @@ In this article, you'll learn how to deploy a security module on your IoT Edge d
## Deploy security module
-Use the following steps to deploy an Defender for IoT security module for IoT Edge.
+Use the following steps to deploy a Defender for IoT security module for IoT Edge.
### Prerequisites
@@ -47,13 +47,13 @@ Use the following steps to deploy an Defender for IoT security module for IoT Ed
1. From the Azure portal, open **Marketplace**.
-1. Select **Internet of Things**, then search for **Defender for IoT** and select it.
+1. Select **Internet of Things**, then search for **Azure Security Center for IoT** and select it.
:::image type="content" source="media/howto/edge-onboarding-8.png" alt-text="Select Defender for IoT":::
-1. Click **Create** to configure the deployment.
+1. Select **Create** to configure the deployment.
-1. Choose the Azure **Subscription** of your IoT Hub, then select your **IoT Hub**.<br>Select **Deploy to a device** to target a single device or select **Deploy at Scale** to target multiple devices, and click **Create**. For more information about deploying at scale, see [How to deploy](../iot-edge/how-to-deploy-at-scale.md).
+1. Choose the Azure **Subscription** of your IoT Hub, then select your **IoT Hub**.<br>Select **Deploy to a device** to target a single device or select **Deploy at Scale** to target multiple devices, and select **Create**. For more information about deploying at scale, see [How to deploy](../iot-edge/how-to-deploy-at-scale.md).
>[!Note] >If you selected **Deploy at Scale**, add the device name and details before continuing to the **Add Modules** tab in the following instructions.
@@ -64,7 +64,7 @@ Complete each step to complete your IoT Edge deployment for Defender for IoT.
1. Select the **AzureSecurityCenterforIoT** module. 1. On the **Module Settings** tab, change the **name** to **azureiotsecurity**.
-1. On the **Enviroment Variables** tab, add a variable if needed (for example, debug level).
+1. On the **Environment Variables** tab, add a variable if needed (for example, you can add *debug level* and set it to one of the following values: "Fatal", "Error", "Warning", or "Information").
1. On the **Container Create Options** tab, add the following configuration: ``` json
@@ -108,8 +108,12 @@ Complete each step to complete your IoT Edge deployment for Defender for IoT.
#### Step 2: Runtime settings 1. Select **Runtime Settings**.
-1. Under **Edge Hub**, change the **Image** to **mcr.microsoft.com/azureiotedge-hub:1.0.8.3**.
-1. Verify **Create Options** is set to the following configuration:
+2. Under **Edge Hub**, change the **Image** to **mcr.microsoft.com/azureiotedge-hub:1.0.8.3**.
+
+ >[!Note]
+ > Currently, version 1.0.8.3 or older is supported.
+
+3. Verify **Create Options** is set to the following configuration:
``` json {
@@ -135,9 +139,9 @@ Complete each step to complete your IoT Edge deployment for Defender for IoT.
} ```
-1. Select **Save**.
+4. Select **Save**.
-1. Select **Next**.
+5. Select **Next**.
#### Step 3: Specify routes
digital-twins https://docs.microsoft.com/en-us/azure/digital-twins/concepts-models https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/digital-twins/concepts-models.md
@@ -89,53 +89,7 @@ This section contains an example of a typical model, written as a DTDL interface
Consider that planets may also interact with **moons** that are their satellites, and may contain **craters**. In the example below, the `Planet` model expresses connections to these other entities by referencing two external modelsΓÇö`Moon` and `Crater`. These models are also defined in the example code below, but are kept very simple so as not to detract from the primary `Planet` example.
-```json
-[
- {
- "@id": "dtmi:com:contoso:Planet;1",
- "@type": "Interface",
- "@context": "dtmi:dtdl:context;2",
- "displayName": "Planet",
- "contents": [
- {
- "@type": "Property",
- "name": "name",
- "schema": "string"
- },
- {
- "@type": "Property",
- "name": "mass",
- "schema": "double"
- },
- {
- "@type": "Telemetry",
- "name": "Temperature",
- "schema": "double"
- },
- {
- "@type": "Relationship",
- "name": "satellites",
- "target": "dtmi:com:contoso:Moon;1"
- },
- {
- "@type": "Component",
- "name": "deepestCrater",
- "schema": "dtmi:com:contoso:Crater;1"
- }
- ]
- },
- {
- "@id": "dtmi:com:contoso:Crater;1",
- "@type": "Interface",
- "@context": "dtmi:dtdl:context;2"
- },
- {
- "@id": "dtmi:com:contoso:Moon;1",
- "@type": "Interface",
- "@context": "dtmi:dtdl:context;2"
- }
-]
-```
+:::code language="json" source="~/digital-twins-docs-samples/models/Planet-Crater-Moon.json":::
The fields of the model are:
@@ -167,57 +121,7 @@ Sometimes, you may want to specialize a model further. For example, it might be
The following example re-imagines the *Planet* model from the earlier DTDL example as a subtype of a larger *CelestialBody* model. The "parent" model is defined first, and then the "child" model builds on it by using the field `extends`.
-```json
-[
- {
- "@id": "dtmi:com:contoso:CelestialBody;1",
- "@type": "Interface",
- "@context": "dtmi:dtdl:context;2",
- "displayName": "Celestial body",
- "contents": [
- {
- "@type": "Property",
- "name": "name",
- "schema": "string"
- },
- {
- "@type": "Property",
- "name": "mass",
- "schema": "double"
- },
- {
- "@type": "Telemetry",
- "name": "temperature",
- "schema": "double"
- }
- ]
- },
- {
- "@id": "dtmi:com:contoso:Planet;1",
- "@type": "Interface",
- "@context": "dtmi:dtdl:context;2",
- "displayName": "Planet",
- "extends": "dtmi:com:contoso:CelestialBody;1",
- "contents": [
- {
- "@type": "Relationship",
- "name": "satellites",
- "target": "dtmi:com:contoso:Moon;1"
- },
- {
- "@type": "Component",
- "name": "deepestCrater",
- "schema": "dtmi:com:contoso:Crater;1"
- }
- ]
- },
- {
- "@id": "dtmi:com:contoso:Crater;1",
- "@type": "Interface",
- "@context": "dtmi:dtdl:context;2"
- }
-]
-```
+:::code language="json" source="~/digital-twins-docs-samples/models/CelestialBody-Planet-Crater.json":::
In this example, *CelestialBody* contributes a name, a mass, and a temperature to *Planet*. The `extends` section is an interface name, or an array of interface names (allowing the extending interface to inherit from multiple parent models if desired).
digital-twins https://docs.microsoft.com/en-us/azure/digital-twins/concepts-query-units https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/digital-twins/concepts-query-units.md
@@ -36,34 +36,7 @@ The Azure Digital Twins [SDKs](how-to-use-apis-sdks.md) allow you to extract the
The following code snippet demonstrates how you can extract the query charges incurred when calling the query API. It iterates over the response pages first to access the query-charge header, and then iterates over the digital twin results within each page.
-```csharp
-AsyncPageable<string> asyncPageableResponseWithCharge = client.QueryAsync("SELECT * FROM digitaltwins");
-int pageNum = 0;
-
-// The "await" keyword here is required, as a call is made when fetching a new page.
-
-await foreach (Page<string> page in asyncPageableResponseWithCharge.AsPages())
-{
- Console.WriteLine($"Page {++pageNum} results:");
-
- // Extract the query-charge header from the page
-
- if (QueryChargeHelper.TryGetQueryCharge(page, out float queryCharge))
- {
- Console.WriteLine($"Query charge was: {queryCharge}");
- }
-
- // Iterate over the twin instances.
-
- // The "await" keyword is not required here, as the paged response is local.
-
- foreach (string response in page.Values)
- {
- BasicDigitalTwin twin = JsonSerializer.Deserialize<BasicDigitalTwin>(response);
- Console.WriteLine($"Found digital twin '{twin.Id}'");
- }
-}
-```
+:::code language="csharp" source="~/digital-twins-docs-samples/sdks/csharp/getQueryCharges.cs":::
## Next steps
digital-twins https://docs.microsoft.com/en-us/azure/digital-twins/concepts-route-events https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/digital-twins/concepts-route-events.md
@@ -76,11 +76,7 @@ To create an event route, you can use the Azure Digital Twins [**data plane APIs
Here is an example of creating an event route within a client application, using the `CreateOrReplaceEventRouteAsync` [.NET (C#) SDK](/dotnet/api/overview/azure/digitaltwins/client?view=azure-dotnet&preserve-view=true) call:
-```csharp
-string eventFilter = "$eventType = 'DigitalTwinTelemetryMessages' or $eventType = 'DigitalTwinLifecycleNotification'";
-var er = new DigitalTwinsEventRoute("endpointName", eventFilter);
-await client.CreateOrReplaceEventRouteAsync("routeName", er);
-```
+:::code language="csharp" source="~/digital-twins-docs-samples/sdks/csharp/eventRoute_operations.cs" id="CreateEventRoute":::
1. First, a `DigitalTwinsEventRoute` object is created, and the constructor takes the name of an endpoint. This `endpointName` field identifies an endpoint such as an Event Hub, Event Grid, or Service Bus. These endpoints must be created in your subscription and attached to Azure Digital Twins using control plane APIs before making this registration call.
digital-twins https://docs.microsoft.com/en-us/azure/digital-twins/concepts-twins-graph https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/digital-twins/concepts-twins-graph.md
@@ -48,7 +48,7 @@ Below is a snippet of client code that uses the [DigitalTwins APIs](/rest/api/di
You can initialize the properties of a twin when it is created, or set them later. To create a twin with initialized properties, create a JSON document that provides the necessary initialization values.
-[!INCLUDE [Azure Digital Twins code: create twin](../../includes/digital-twins-code-create-twin.md)]
+:::code language="csharp" source="~/digital-twins-docs-samples/sdks/csharp/twin_operations_other.cs" id="CreateTwin_noHelper":::
You can also use a helper class called `BasicDigitalTwin` to store property fields in a "twin" object more directly, as an alternative to using a dictionary. For more information about the helper class and examples of its use, see the [*Create a digital twin*](how-to-manage-twin.md#create-a-digital-twin) section of *How-to: Manage digital twins*.
@@ -59,25 +59,7 @@ You can also use a helper class called `BasicDigitalTwin` to store property fiel
Here is some example client code that uses the [DigitalTwins APIs](/rest/api/digital-twins/dataplane/twins) to build a relationship between a *Floor*-type digital twin called *GroundFloor* and a *Room*-type digital twin called *Cafe*.
-```csharp
-// Create Twins, using functions similar to the previous sample
-await CreateRoom("Cafe", 70, 66);
-await CreateFloor("GroundFloor", averageTemperature=70);
-// Create relationships
-var relationship = new BasicRelationship
-{
- TargetId = "Cafe",
- Name = "contains"
-};
-try
-{
- string relId = $"GroundFloor-contains-Cafe";
- await client.CreateOrReplaceRelationshipAsync<BasicRelationship>("GroundFloor", relId, relationship);
-} catch(ErrorResponseException e)
-{
- Console.WriteLine($"*** Error creating relationship: {e.Response.StatusCode}");
-}
-```
+:::code language="csharp" source="~/digital-twins-docs-samples/sdks/csharp/graph_operations_other.cs" id="CreateRelationship_3":::
## JSON representations of graph elements
digital-twins https://docs.microsoft.com/en-us/azure/digital-twins/how-to-authenticate-client https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/digital-twins/how-to-authenticate-client.md
@@ -54,10 +54,7 @@ First, include the SDK package `Azure.DigitalTwins.Core` and the `Azure.Identity
You'll also need to add the following using statements to your project code:
-```csharp
-using Azure.Identity;
-using Azure.DigitalTwins.Core;
-```
+:::code language="csharp" source="~/digital-twins-docs-samples/sdks/csharp/authentication.cs" id="Azure_Digital_Twins_dependencies":::
Then, add code to obtain credentials using one of the methods in `Azure.Identity`.
@@ -69,23 +66,7 @@ To use the default Azure credentials, you'll need the Azure Digital Twins instan
Here is a code sample to add a `DefaultAzureCredential` to your project:
-```csharp
-// The URL of your instance, starting with the protocol (https://)
-private static string adtInstanceUrl = "https://<your-Azure-Digital-Twins-instance-URL>";
-
-//...
-
-DigitalTwinsClient client;
-try
-{
- var credential = new DefaultAzureCredential();
- client = new DigitalTwinsClient(new Uri(adtInstanceUrl), credential);
-} catch(Exception e)
-{
- Console.WriteLine($"Authentication or client creation error: {e.Message}");
- Environment.Exit(0);
-}
-```
+:::code language="csharp" source="~/digital-twins-docs-samples/sdks/csharp/authentication.cs" id="DefaultAzureCredential_full":::
#### Set up local Azure credentials
@@ -101,12 +82,7 @@ To use the default Azure credentials, you'll need the Azure Digital Twins instan
In an Azure function, you can use the managed identity credentials like this:
-```csharp
-ManagedIdentityCredential cred = new ManagedIdentityCredential(adtAppId);
-DigitalTwinsClientOptions opts =
- new DigitalTwinsClientOptions { Transport = new HttpClientTransport(httpClient) });
-client = new DigitalTwinsClient(new Uri(adtInstanceUrl), cred, opts);
-```
+:::code language="csharp" source="~/digital-twins-docs-samples/sdks/csharp/authentication.cs" id="ManagedIdentityCredential":::
### InteractiveBrowserCredential method
@@ -119,27 +95,7 @@ To use the interactive browser credentials, you will need an **app registration*
Here is an example of the code to create an authenticated SDK client using `InteractiveBrowserCredential`.
-```csharp
-// Your client / app registration ID
-private static string clientId = "<your-client-ID>";
-// Your tenant / directory ID
-private static string tenantId = "<your-tenant-ID>";
-// The URL of your instance, starting with the protocol (https://)
-private static string adtInstanceUrl = "https://<your-Azure-Digital-Twins-instance-URL>";
-
-//...
-
-DigitalTwinsClient client;
-try
-{
- var credential = new InteractiveBrowserCredential(tenantId, clientId);
- client = new DigitalTwinsClient(new Uri(adtInstanceUrl), credential);
-} catch(Exception e)
-{
- Console.WriteLine($"Authentication or client creation error: {e.Message}");
- Environment.Exit(0);
-}
-```
+:::code language="csharp" source="~/digital-twins-docs-samples/sdks/csharp/authentication.cs" id="InteractiveBrowserCredential":::
>[!NOTE] > While you can place the client ID, tenant ID and instance URL directly into the code as shown above, it's a good idea to have your code get these values from a configuration file or environment variable instead.
digital-twins https://docs.microsoft.com/en-us/azure/digital-twins/how-to-create-azure-function https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/digital-twins/how-to-create-azure-function.md
@@ -1,8 +1,8 @@
--- # Mandatory fields.
-title: Set up an Azure function for processing data
+title: Set up a function in Azure for processing data
titleSuffix: Azure Digital Twins
-description: See how to create an Azure function that can access and be triggered by digital twins.
+description: See how to create a function in Azure that can access and be triggered by digital twins.
author: baanders ms.author: baanders # Microsoft employees only ms.date: 8/27/2020
@@ -15,27 +15,27 @@ ms.service: digital-twins
# manager: MSFT-alias-of-manager-or-PM-counterpart ---
-# Connect Azure Functions apps for processing data
+# Connect function apps in Azure for processing data
-Updating digital twins based on data is handled using [**event routes**](concepts-route-events.md) through compute resources, such as [Azure Functions](../azure-functions/functions-overview.md). An Azure function can be used to update a digital twin in response to:
+Updating digital twins based on data is handled using [**event routes**](concepts-route-events.md) through compute resources, such as a function that's made by using [Azure Functions](../azure-functions/functions-overview.md). Functions can be used to update a digital twin in response to:
* device telemetry data coming from IoT Hub
* property change or other data coming from another digital twin within the twin graph
-This article walks you through creating an Azure function for use with Azure Digital Twins.
+This article walks you through creating a function in Azure for use with Azure Digital Twins.
Here is an overview of the steps it contains:
-1. Create an Azure Functions app in Visual Studio
-2. Write an Azure function with an [Event Grid](../event-grid/overview.md) trigger
+1. Create an Azure Functions project in Visual Studio
+2. Write a function with an [Event Grid](../event-grid/overview.md) trigger
3. Add authentication code to the function (to be able to access Azure Digital Twins) 4. Publish the function app to Azure
-5. Set up [security](concepts-security.md) access for the Azure function app
+5. Set up [security](concepts-security.md) access for the function app
## Prerequisite: Set up Azure Digital Twins instance [!INCLUDE [digital-twins-prereq-instance.md](../../includes/digital-twins-prereq-instance.md)]
-## Create an Azure Functions app in Visual Studio
+## Create a function app in Visual Studio
In Visual Studio 2019, select _File > New > Project_, search for the _Azure Functions_ template, and select _Next_.
@@ -47,15 +47,15 @@ Specify a name for the function app and select _Create_.
Select the type of the function app *Event Grid trigger* and select _Create_.
-:::image type="content" source="media/how-to-create-azure-function/eventgridtrigger-function.png" alt-text="Visual Studio: Azure function project trigger dialog":::
+:::image type="content" source="media/how-to-create-azure-function/eventgridtrigger-function.png" alt-text="Visual Studio: Azure Functions project trigger dialog":::
-Once your function app is created, your visual studio will have auto populated code sample in **function.cs** file in your project folder. This short Azure function is used to log events.
+Once your function app is created, Visual Studio will have auto-populated a code sample in the **function.cs** file in your project folder. This short function is used to log events.
:::image type="content" source="media/how-to-create-azure-function/visual-studio-sample-code.png" alt-text="Visual Studio: Project window with sample code":::
-## Write an Azure function with an Event Grid trigger
+## Write a function with an Event Grid trigger
-You can write an Azure function by adding SDK to your function app. The function app interacts with Azure Digital Twins using the [Azure Digital Twins SDK for .NET (C#)](/dotnet/api/overview/azure/digitaltwins/client?view=azure-dotnet&preserve-view=true).
+You can write a function by adding the SDK to your function app. The function app interacts with Azure Digital Twins using the [Azure Digital Twins SDK for .NET (C#)](/dotnet/api/overview/azure/digitaltwins/client?view=azure-dotnet&preserve-view=true).
In order to use the SDK, you'll need to include the following packages in your project. You can either install the packages using the Visual Studio NuGet package manager or add them using the `dotnet` command-line tool. Choose either of these methods:
@@ -73,103 +73,56 @@ For configuration of the Azure SDK pipeline to set up properly for Azure Functio
**Option 2. Add packages using `dotnet` command-line tool:**
+Alternatively, you can use the following `dotnet add` commands in a command-line tool:
```cmd/sh
-dotnet add package Azure.DigitalTwins.Core --version 1.0.0-preview.3
-dotnet add package Azure.identity --version 1.2.2
dotnet add package System.Net.Http
dotnet add package Azure.Core.Pipeline
```
-Next, in your Visual Studio Solution Explorer, open _function.cs_ file where you have sample code and add the following _using_ statements to your Azure function.
-```csharp
-using Azure.DigitalTwins.Core;
-using Azure.Identity;
-using System.Net.Http;
-using Azure.Core.Pipeline;
-```
-## Add authentication code to the Azure function
+Then, add two more dependencies to your project that will be needed to work with Azure Digital Twins. You can use the links below to navigate to the packages on NuGet, where you can find the console commands (including for .NET CLI) to add the latest version of each to your project.
+ * [**Azure.DigitalTwins.Core**](https://www.nuget.org/packages/Azure.DigitalTwins.Core). This is the package for the [Azure Digital Twins SDK for .NET](/dotnet/api/overview/azure/digitaltwins/client?view=azure-dotnet&preserve-view=true).
+ * [**Azure.Identity**](https://www.nuget.org/packages/Azure.Identity). This library provides tools to help with authentication against Azure.
+
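For example, the equivalent .NET CLI commands would look something like this (no version pinned, so the latest release of each package is pulled):

```cmd/sh
dotnet add package Azure.DigitalTwins.Core
dotnet add package Azure.Identity
```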
+Next, in your Visual Studio Solution Explorer, open the _function.cs_ file that contains the sample code and add the following _using_ statements to your function.
+
+:::code language="csharp" source="~/digital-twins-docs-samples/sdks/csharp/adtIngestFunctionSample.cs" id="Function_dependencies":::
+
+## Add authentication code to the function
-You will now declare class level variables and add authentication code that will allow the function to access Azure Digital Twins. You will add the following to your Azure function in the {your function name}.cs file.
+You will now declare class-level variables and add authentication code that will allow the function to access Azure Digital Twins. You will add the following to your function in the {your function name}.cs file.
* Read the ADT service URL as an environment variable. It is good practice to read the service URL from an environment variable rather than hard-coding it in the function.
-```csharp
-private static readonly string adtInstanceUrl = Environment.GetEnvironmentVariable("ADT_SERVICE_URL");
-```
+
+ :::code language="csharp" source="~/digital-twins-docs-samples/sdks/csharp/adtIngestFunctionSample.cs" id="ADT_service_URL":::
* A static variable to hold an HttpClient instance. HttpClient is relatively expensive to create, and we want to avoid creating a new one for every function invocation.
-```csharp
-private static readonly HttpClient httpClient = new HttpClient();
-```
-* You can use the managed identity credentials in Azure function.
-```csharp
-ManagedIdentityCredential cred = new ManagedIdentityCredential("https://digitaltwins.azure.net");
-```
+
+ :::code language="csharp" source="~/digital-twins-docs-samples/sdks/csharp/adtIngestFunctionSample.cs" id="HTTP_client":::
+
+* You can use the managed identity credentials in Azure Functions.
+ :::code language="csharp" source="~/digital-twins-docs-samples/sdks/csharp/adtIngestFunctionSample.cs" id="ManagedIdentityCredential":::
+ * Add a local _DigitalTwinsClient_ variable inside your function to hold your Azure Digital Twins client instance. Do *not* make this variable static inside your class.
-```csharp
-DigitalTwinsClient client = new DigitalTwinsClient(new Uri(adtInstanceUrl), cred, new DigitalTwinsClientOptions { Transport = new HttpClientTransport(httpClient) });
-```
+ :::code language="csharp" source="~/digital-twins-docs-samples/sdks/csharp/adtIngestFunctionSample.cs" id="DigitalTwinsClient":::
+ * Add a null check for _adtInstanceUrl_ and wrap your function logic in a try/catch block to catch any exceptions. After these changes, your function code will be similar to the following:
-```csharp
-// Default URL for triggering event grid function in the local environment.
-// http://localhost:7071/runtime/webhooks/EventGrid?functionName={functionname}
-using System;
-using Microsoft.Azure.WebJobs;
-using Microsoft.Azure.WebJobs.Host;
-using Microsoft.Azure.EventGrid.Models;
-using Microsoft.Azure.WebJobs.Extensions.EventGrid;
-using Microsoft.Extensions.Logging;
-using Azure.DigitalTwins.Core;
-using Azure.Identity;
-using System.Net.Http;
-using Azure.Core.Pipeline;
-
-namespace adtIngestFunctionSample
-{
- public class Function1
- {
- //Your Digital Twin URL is stored in an application setting in Azure Functions
- private static readonly string adtInstanceUrl = Environment.GetEnvironmentVariable("ADT_SERVICE_URL");
- private static readonly HttpClient httpClient = new HttpClient();
-
- [FunctionName("TwinsFunction")]
- public void Run([EventGridTrigger] EventGridEvent eventGridEvent, ILogger log)
- {
- log.LogInformation(eventGridEvent.Data.ToString());
- if (adtInstanceUrl == null) log.LogError("Application setting \"ADT_SERVICE_URL\" not set");
- try
- {
- //Authenticate with Digital Twins
- ManagedIdentityCredential cred = new ManagedIdentityCredential("https://digitaltwins.azure.net");
- DigitalTwinsClient client = new DigitalTwinsClient(new Uri(adtInstanceUrl), cred, new DigitalTwinsClientOptions { Transport = new HttpClientTransport(httpClient) });
- log.LogInformation($"ADT service client connection created.");
- /*
- * Add your business logic here
- */
- }
- catch (Exception e)
- {
- log.LogError(e.Message);
- }
-
- }
- }
-}
-```
+:::code language="csharp" source="~/digital-twins-docs-samples/sdks/csharp/adtIngestFunctionSample.cs":::
## Publish the function app to Azure
-To publish the function app to Azure, right-select the function project (not the solution) in Solution Explorer, and choose **Publish**.
+To publish the project to a function app in Azure, right-select the function project (not the solution) in Solution Explorer, and choose **Publish**.
> [!IMPORTANT]
-> Publishing an Azure function will incur additional charges on your subscription, independent of Azure Digital Twins.
+> Publishing to a function app in Azure incurs additional charges on your subscription, independent of Azure Digital Twins.
-:::image type="content" source="media/how-to-create-azure-function/publish-azure-function.png" alt-text="Visual Studio: publish Azure function ":::
+:::image type="content" source="media/how-to-create-azure-function/publish-azure-function.png" alt-text="Visual Studio: publish function to Azure":::
Select **Azure** as the publishing target and select **Next**.
-:::image type="content" source="media/how-to-create-azure-function/publish-azure-function-1.png" alt-text="Visual Studio: publish Azure function dialog, select Azure ":::
+:::image type="content" source="media/how-to-create-azure-function/publish-azure-function-1.png" alt-text="Visual Studio: publish Azure Functions dialog, select Azure ":::
:::image type="content" source="media/how-to-create-azure-function/publish-azure-function-2.png" alt-text="Visual Studio: publish function dialog, select Azure Function App(Windows) or (Linux) based on your machine":::
@@ -180,16 +133,16 @@ Select **Azure** as the publishing target and select **Next**.
:::image type="content" source="media/how-to-create-azure-function/publish-azure-function-5.png" alt-text="Visual Studio: publish function dialog, Select your function app from the list, and finish"::: On the following page, enter the desired name for the new function app, a resource group, and other details.
-For your Functions app to be able to access Azure Digital Twins, it needs to have a system-managed identity and have permissions to access your Azure Digital Twins instance.
+For your function app to be able to access Azure Digital Twins, it needs to have a system-managed identity and have permissions to access your Azure Digital Twins instance.
Next, you can set up security access for the function using CLI or Azure portal. Choose either of these methods:
-## Set up security access for the Azure function app
-You can set up security access for the Azure function app using one of these options:
+## Set up security access for the function app
+You can set up security access for the function app using one of these options:
-### Option 1: Set up security access for the Azure function app using CLI
+### Option 1: Set up security access for the function app using CLI
-The Azure function skeleton from earlier examples requires that a bearer token to be passed to it, in order to be able to authenticate with Azure Digital Twins. To make sure that this bearer token is passed, you'll need to set up [Managed Service Identity (MSI)](../active-directory/managed-identities-azure-resources/overview.md) for the function app. This only needs to be done once for each function app.
+The function skeleton from earlier examples requires that a bearer token be passed to it in order to authenticate with Azure Digital Twins. To make sure that this bearer token is passed, you'll need to set up [Managed Service Identity (MSI)](../active-directory/managed-identities-azure-resources/overview.md) for the function app. This only needs to be done once for each function app.
You can create a system-managed identity and assign the function app's identity to the _**Azure Digital Twins Data Owner**_ role for your Azure Digital Twins instance. This gives the function app permission to perform data plane activities on the instance. Then, make the URL of your Azure Digital Twins instance accessible to your function by setting an environment variable.
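A sketch of those two CLI steps, with placeholder names throughout:

```azurecli-interactive
# Enable a system-managed identity on the function app
az functionapp identity assign -g <your-resource-group> -n <your-function-app-name>

# Assign that identity the Azure Digital Twins Data Owner role on the instance
az dt role-assignment create --dt-name <your-Azure-Digital-Twins-instance> --assignee "<principal-ID-from-previous-output>" --role "Azure Digital Twins Data Owner"
```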
@@ -215,7 +168,7 @@ Lastly, you can make the URL of your Azure Digital Twins instance accessible to
```azurecli-interactive az functionapp config appsettings set -g <your-resource-group> -n <your-App-Service-(function-app)-name> --settings "ADT_SERVICE_URL=https://<your-Azure-Digital-Twins-instance-hostname>" ```
-### Option 2: Set up security access for the Azure function app using Azure portal
+### Option 2: Set up security access for the function app using Azure portal
A system-assigned managed identity enables Azure resources to authenticate to cloud services (for example, Azure Key Vault) without storing credentials in code. Once enabled, all necessary permissions can be granted via Azure role-based access control. The lifecycle of this type of managed identity is tied to the lifecycle of this resource. Additionally, each resource (for example, a virtual machine) can only have one system-assigned managed identity.
@@ -266,11 +219,11 @@ You can get ADT_INSTANCE_URL by appending **_https://_** to your instance host n
You can now create an application setting following the steps below:
-* Search for your Azure function using function name in the search bar and select the function from the list
+* Search for your app using the function app name in the search bar and select the function app from the list
* Select _Configuration_ on the navigation bar on the left to create a new application setting * In the _Application settings_ tab, select _+ New application setting_
-:::image type="content" source="media/how-to-create-azure-function/search-for-azure-function.png" alt-text="Azure portal: Search for existing Azure function":::
+:::image type="content" source="media/how-to-create-azure-function/search-for-azure-function.png" alt-text="Azure portal: Search for an existing function app":::
:::image type="content" source="media/how-to-create-azure-function/application-setting.png" alt-text="Azure portal: Configure application settings":::
@@ -296,10 +249,10 @@ You can view that application settings are updated by selecting _Notifications_
## Next steps
-In this article, you followed the steps to set up an Azure function for use with Azure Digital Twins. Next, you can subscribe your Azure function to Event Grid, to listen on an endpoint. This endpoint could be:
+In this article, you followed the steps to set up a function app in Azure for use with Azure Digital Twins. Next, you can subscribe your function to Event Grid, to listen on an endpoint. This endpoint could be:
* An Event Grid endpoint attached to Azure Digital Twins to process messages coming from Azure Digital Twins itself (such as property change messages, telemetry messages generated by [digital twins](concepts-twins-graph.md) in the twin graph, or life-cycle messages) * The IoT system topics used by IoT Hub to send telemetry and other device events * An Event Grid endpoint receiving messages from other services
-Next, see how to build on your basic Azure function to ingest IoT Hub data into Azure Digital Twins:
-* [*How-to: Ingest telemetry from IoT Hub*](how-to-ingest-iot-hub-data.md)
\ No newline at end of file
+Next, see how to build on your basic function to ingest IoT Hub data into Azure Digital Twins:
+* [*How-to: Ingest telemetry from IoT Hub*](how-to-ingest-iot-hub-data.md)
digital-twins https://docs.microsoft.com/en-us/azure/digital-twins/how-to-create-custom-sdks https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/digital-twins/how-to-create-custom-sdks.md
@@ -101,17 +101,7 @@ Whenever an error occurs in the SDK (including HTTP errors such as 404), the SDK
Here is a code snippet that tries to add a twin and catches any errors in this process:
-```csharp
-try
-{
- await client.CreateOrReplaceDigitalTwinAsync<BasicDigitalTwin>(id, initData);
- Console.WriteLine($"Created a twin successfully: {id}");
-}
-catch (ErrorResponseException e)
-{
- Console.WriteLine($"*** Error creating twin {id}: {e.Response.StatusCode}");
-}
-```
+:::code language="csharp" source="~/digital-twins-docs-samples/sdks/csharp/twin_operations_other.cs" id="CreateTwin_errorHandling":::
### Paging
@@ -119,62 +109,15 @@ AutoRest generates two types of paging patterns for the SDK:
* One for all APIs except the Query API * One for the Query API
-In the non-query paging pattern, here is a code snippet showing how to retrieve a paged list of outgoing relationships from Azure Digital Twins:
-
-```csharp
- try
- {
- // List the relationships.
- AsyncPageable<BasicRelationship> results = client.GetRelationshipsAsync<BasicRelationship>(srcId);
- Console.WriteLine($"Twin {srcId} is connected to:");
- // Iterate through the relationships found.
- int numberOfRelationships = 0;
- await foreach (string rel in results)
- {
- ++numberOfRelationships;
- // Do something with each relationship found
- Console.WriteLine($"Found relationship-{rel.Name}->{rel.TargetId}");
- }
- Console.WriteLine($"Found {numberOfRelationships} relationships on {srcId}");
-} catch (RequestFailedException rex) {
- Console.WriteLine($"Relationship retrieval error: {rex.Status}:{rex.Message}");
-}
-```
+In the non-query paging pattern, here is a sample method showing how to retrieve a paged list of outgoing relationships from Azure Digital Twins:
+
+:::code language="csharp" source="~/digital-twins-docs-samples/sdks/csharp/graph_operations_sample.cs" id="FindOutgoingRelationshipsMethod":::
The second pattern is only generated for the Query API. It uses a `continuationToken` explicitly. Here is an example with this pattern:
-```csharp
-string query = "SELECT * FROM digitaltwins";
-string conToken = null; // continuation token from the query
-int page = 0;
-try
-{
- // Repeat the query while there are pages
- do
- {
- QuerySpecification spec = new QuerySpecification(query, conToken);
- QueryResult qr = await client.Query.QueryTwinsAsync(spec);
- page++;
- Console.WriteLine($"== Query results page {page}:");
- if (qr.Items != null)
- {
- // Query returns are JObjects
- foreach(JObject o in qr.Items)
- {
- string twinId = o.Value<string>("$dtId");
- Console.WriteLine($" Found {twinId}");
- }
- }
- Console.WriteLine($"== End query results page {page}");
- conToken = qr.ContinuationToken;
- } while (conToken != null);
-} catch (ErrorResponseException e)
-{
- Console.WriteLine($"*** Error in twin query: ${e.Response.StatusCode}");
-}
-```
+:::code language="csharp" source="~/digital-twins-docs-samples/sdks/csharp/queries.cs" id="PagedQuery":::
## Next steps
digital-twins https://docs.microsoft.com/en-us/azure/digital-twins/how-to-ingest-iot-hub-data https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/digital-twins/how-to-ingest-iot-hub-data.md
@@ -19,49 +19,36 @@ ms.service: digital-twins
Azure Digital Twins is driven with data from IoT devices and other sources. A common source for device data to use in Azure Digital Twins is [IoT Hub](../iot-hub/about-iot-hub.md).
-The process for ingesting data into Azure Digital Twins is to set up an external compute resource, such as an [Azure function](../azure-functions/functions-overview.md), that receives the data and uses the [DigitalTwins APIs](/rest/api/digital-twins/dataplane/twins) to set properties or fire telemetry events on [digital twins](concepts-twins-graph.md) accordingly.
+The process for ingesting data into Azure Digital Twins is to set up an external compute resource, such as a function that's made by using [Azure Functions](../azure-functions/functions-overview.md). The function receives the data and uses the [DigitalTwins APIs](/rest/api/digital-twins/dataplane/twins) to set properties or fire telemetry events on [digital twins](concepts-twins-graph.md) accordingly.
-This how-to document walks through the process for writing an Azure function that can ingest telemetry from IoT Hub.
+This how-to document walks through the process for writing a function that can ingest telemetry from IoT Hub.
## Prerequisites
Before continuing with this example, you'll need to set up the following resources as prerequisites:
* **An IoT hub**. For instructions, see the *Create an IoT Hub* section of [this IoT Hub quickstart](../iot-hub/quickstart-send-telemetry-cli.md).
-* **An Azure Function** with the correct permissions to call your digital twin instance. For instructions, see [*How-to: Set up an Azure function for processing data*](how-to-create-azure-function.md).
+* **A function** with the correct permissions to call your digital twin instance. For instructions, see [*How-to: Set up a function in Azure for processing data*](how-to-create-azure-function.md).
* **An Azure Digital Twins instance** that will receive your device telemetry. For instructions, see [*How-to: Set up an Azure Digital Twins instance and authentication*](./how-to-set-up-instance-portal.md).
### Example telemetry scenario
-This how-to outlines how to send messages from IoT Hub to Azure Digital Twins, using an Azure function. There are many possible configurations and matching strategies you can use for sending messages, but the example for this article contains the following parts:
+This how-to outlines how to send messages from IoT Hub to Azure Digital Twins, using a function in Azure. There are many possible configurations and matching strategies you can use for sending messages, but the example for this article contains the following parts:
* A thermostat device in IoT Hub, with a known device ID
* A digital twin to represent the device, with a matching ID
> [!NOTE]
> This example uses a straightforward ID match between the device ID and a corresponding digital twin's ID, but it is possible to provide more sophisticated mappings from the device to its twin (such as with a mapping table).
-Whenever a temperature telemetry event is sent by the thermostat device, an Azure function processes the telemetry and the *temperature* property of the digital twin should update. This scenario is outlined in a diagram below:
+Whenever a temperature telemetry event is sent by the thermostat device, a function processes the telemetry and the *temperature* property of the digital twin should update. This scenario is outlined in a diagram below:
-:::image type="content" source="media/how-to-ingest-iot-hub-data/events.png" alt-text="A diagram showing a flow chart. In the chart, an IoT Hub device sends Temperature telemetry through IoT Hub to an Azure Function, which updates a temperature property on a twin in Azure Digital Twins." border="false":::
+:::image type="content" source="media/how-to-ingest-iot-hub-data/events.png" alt-text="A diagram showing a flow chart. In the chart, an IoT Hub device sends Temperature telemetry through IoT Hub to a function in Azure, which updates a temperature property on a twin in Azure Digital Twins." border="false":::
## Add a model and twin You can add/upload a model using the CLI command below, and then create a twin using this model that will be updated with information from IoT Hub. The model looks like this:
-```JSON
-{
- "@id": "dtmi:contosocom:DigitalTwins:Thermostat;1",
- "@type": "Interface",
- "@context": "dtmi:dtdl:context;2",
- "contents": [
- {
- "@type": "Property",
- "name": "Temperature",
- "schema": "double"
- }
- ]
-}
-```
+:::code language="json" source="~/digital-twins-docs-samples/models/Thermostat.json":::
To **upload this model to your twins instance**, open the Azure CLI and run the following command:
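A sketch of the upload command, and of the matching twin-creation command, assuming the model JSON is saved locally as *Thermostat.json* and using a hypothetical twin ID:

```azurecli-interactive
az dt model create -n <your-Azure-Digital-Twins-instance> --models Thermostat.json
az dt twin create -n <your-Azure-Digital-Twins-instance> --dtmi "dtmi:contosocom:DigitalTwins:Thermostat;1" --twin-id thermostat67
```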
@@ -94,9 +81,9 @@ Output of a successful twin create command should look like this:
} ```
-## Create an Azure function
+## Create a function
-This section uses the same Visual Studio startup steps and Azure function skeleton from [*How-to: Set up an Azure function for processing data*](how-to-create-azure-function.md). The skeleton handles authentication and creates a service client, ready for you to process data and call Azure Digital Twins APIs in response.
+This section uses the same Visual Studio startup steps and function skeleton from [*How-to: Set up a function for processing data*](how-to-create-azure-function.md). The skeleton handles authentication and creates a service client, ready for you to process data and call Azure Digital Twins APIs in response.
In the steps that follow, you'll add specific code to it for processing IoT telemetry events from IoT Hub.
@@ -108,90 +95,21 @@ Different devices may structure their messages differently, so the code for **th
The following code shows an example for a simple device that sends telemetry as JSON. This sample is fully explored in [*Tutorial: Connect an end-to-end solution*](./tutorial-end-to-end.md). The following code finds the device ID of the device that sent the message, as well as the temperature value.
-```csharp
-JObject deviceMessage = (JObject)JsonConvert.DeserializeObject(eventGridEvent.Data.ToString());
-string deviceId = (string)deviceMessage["systemProperties"]["iothub-connection-device-id"];
-var temperature = deviceMessage["body"]["Temperature"];
-```
+:::code language="csharp" source="~/digital-twins-docs-samples/sdks/csharp/IoTHubToTwins.cs" id="Find_device_ID_and_temperature":::
The next code sample takes the ID and temperature value and uses them to "patch" (make updates to) that twin.
-```csharp
-//Update twin using device temperature
-var updateTwinData = new JsonPatchDocument();
-updateTwinData.AppendReplace("/Temperature", temperature.Value<double>());
-await client.UpdateDigitalTwinAsync(deviceId, updateTwinData);
-...
-```
+:::code language="csharp" source="~/digital-twins-docs-samples/sdks/csharp/IoTHubToTwins.cs" id="Update_twin_with_device_temperature":::
-### Update your Azure function code
+### Update your function code
-Now that you understand the code from the earlier samples, open your Azure function from the [*Prerequisites*](#prerequisites) section in Visual Studio. (If you don't have an Azure function, visit the link in the prerequisites to create one now).
+Now that you understand the code from the earlier samples, open the function you created in the [*Prerequisites*](#prerequisites) section in Visual Studio. (If you don't have one yet, visit the link in the prerequisites to create it now.)
-Replace your Azure function's code with this sample code.
+Replace your function's code with this sample code.
-```csharp
-using System;
-using System.Net.Http;
-using Azure.Core.Pipeline;
-using Azure.DigitalTwins.Core;
-using Azure.DigitalTwins.Core.Serialization;
-using Azure.Identity;
-using Microsoft.Azure.EventGrid.Models;
-using Microsoft.Azure.WebJobs;
-using Microsoft.Azure.WebJobs.Extensions.EventGrid;
-using Microsoft.Extensions.Logging;
-using Newtonsoft.Json;
-using Newtonsoft.Json.Linq;
+:::code language="csharp" source="~/digital-twins-docs-samples/sdks/csharp/IoTHubToTwins.cs":::
-namespace IotHubtoTwins
-{
- public class IoTHubtoTwins
- {
- private static readonly string adtInstanceUrl = Environment.GetEnvironmentVariable("ADT_SERVICE_URL");
- private static readonly HttpClient httpClient = new HttpClient();
-
- [FunctionName("IoTHubtoTwins")]
- public async void Run([EventGridTrigger] EventGridEvent eventGridEvent, ILogger log)
- {
- if (adtInstanceUrl == null) log.LogError("Application setting \"ADT_SERVICE_URL\" not set");
-
- try
- {
- //Authenticate with Digital Twins
- ManagedIdentityCredential cred = new ManagedIdentityCredential("https://digitaltwins.azure.net");
- DigitalTwinsClient client = new DigitalTwinsClient(
- new Uri(adtInstanceUrl), cred, new DigitalTwinsClientOptions
- { Transport = new HttpClientTransport(httpClient) });
- log.LogInformation($"ADT service client connection created.");
-
- if (eventGridEvent != null && eventGridEvent.Data != null)
- {
- log.LogInformation(eventGridEvent.Data.ToString());
-
- // Reading deviceId and temperature for IoT Hub JSON
- JObject deviceMessage = (JObject)JsonConvert.DeserializeObject(eventGridEvent.Data.ToString());
- string deviceId = (string)deviceMessage["systemProperties"]["iothub-connection-device-id"];
- var temperature = deviceMessage["body"]["Temperature"];
-
- log.LogInformation($"Device:{deviceId} Temperature is:{temperature}");
-
- //Update twin using device temperature
- var updateTwinData = new JsonPatchDocument();
- updateTwinData.AppendReplace("/Temperature", temperature.Value<double>());
- await client.UpdateDigitalTwinAsync(deviceId, updateTwinData);
- }
- }
- catch (Exception e)
- {
- log.LogError($"Error in ingest function: {e.Message}");
- }
- }
- }
-}
-```
-Save your function code and publish the function App to Azure.
-You can do this by referring to [*Publish the Function App*](./how-to-create-azure-function.md#publish-the-function-app-to-azure) section of [*How-to: Set up an Azure function for processing data*](how-to-create-azure-function.md).
+Save your function code and publish the function app to Azure. To learn how, see [*Publish the function app*](./how-to-create-azure-function.md#publish-the-function-app-to-azure) in [*How to set up a function in Azure to process data*](how-to-create-azure-function.md).
After a successful publish, you will see the output in the Visual Studio command window as shown below:
@@ -212,7 +130,7 @@ You can also verify your status of the publish process in the [Azure portal](htt
## Connect your function to IoT Hub Set up an event destination for hub data.
-In the [Azure portal](https://portal.azure.com/), navigate to your IoT Hub instance that you created in the [*Prerequisites*](#prerequisites) section. Under **Events**, create a subscription for your Azure function.
+In the [Azure portal](https://portal.azure.com/), navigate to your IoT Hub instance that you created in the [*Prerequisites*](#prerequisites) section. Under **Events**, create a subscription for your function.
:::image type="content" source="media/how-to-ingest-iot-hub-data/add-event-subscription.png" alt-text="Screenshot of the Azure portal that shows Adding an event subscription.":::
@@ -220,7 +138,7 @@ In the **Create Event Subscription** page, fill the fields as follows:
1. Under **Name**, name the subscription what you would like. 2. Under **Event Schema**, choose _Event Grid Schema_. 3. Under **Event Types**, choose the _Device Telemetry_ checkbox and uncheck other event types.
- 4. Under **Endpoint Type**, Select _Azure function_.
+ 4. Under **Endpoint Type**, select _Azure Function_.
 5. Under **Endpoint**, choose the _Select an endpoint_ link to create an endpoint. :::image type="content" source="media/how-to-ingest-iot-hub-data/create-event-subscription.png" alt-text="Screenshot of the Azure portal to create the event subscription details":::
@@ -230,11 +148,11 @@ In the _Select Azure Function_ page that opens up, verify the below details.
2. **Resource group**: Your resource group 3. **Function app**: Your function app name 4. **Slot**: _Production_
- 5. **Function**: Select your Azure function from the dropdown.
+ 5. **Function**: Select your function from the dropdown.
Save your details by selecting the _Confirm Selection_ button.
-:::image type="content" source="media/how-to-ingest-iot-hub-data/select-azure-function.png" alt-text="Screenshot of the Azure portal to select Azure function":::
+:::image type="content" source="media/how-to-ingest-iot-hub-data/select-azure-function.png" alt-text="Screenshot of the Azure portal to select the function.":::
Select the _Create_ button to create the event subscription.
@@ -283,4 +201,4 @@ To see the value change, repeatedly run the query command above.
## Next steps Read about data ingress and egress with Azure Digital Twins:
-* [*Concepts: Integration with other services*](concepts-integration.md)
\ No newline at end of file
+* [*Concepts: Integration with other services*](concepts-integration.md)
digital-twins https://docs.microsoft.com/en-us/azure/digital-twins/how-to-integrate-azure-signalr https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/digital-twins/how-to-integrate-azure-signalr.md
@@ -69,66 +69,8 @@ Next, start Visual Studio (or another code editor of your choice), and open the
1. Create a new C# class called **SignalRFunctions.cs** in the *SampleFunctionsApp* project. 1. Replace the contents of the class file with the following code:
- ```C#
- using System;
- using System.Threading.Tasks;
- using Microsoft.AspNetCore.Http;
- using Microsoft.Azure.EventGrid.Models;
- using Microsoft.Azure.WebJobs;
- using Microsoft.Azure.WebJobs.Extensions.Http;
- using Microsoft.Azure.WebJobs.Extensions.EventGrid;
- using Microsoft.Azure.WebJobs.Extensions.SignalRService;
- using Microsoft.Extensions.Logging;
- using Newtonsoft.Json;
- using Newtonsoft.Json.Linq;
- using System.Collections.Generic;
-
- namespace SampleFunctionsApp
- {
- public static class SignalRFunctions
- {
- public static double temperature;
-
- [FunctionName("negotiate")]
- public static SignalRConnectionInfo GetSignalRInfo(
- [HttpTrigger(AuthorizationLevel.Anonymous, "post")] HttpRequest req,
- [SignalRConnectionInfo(HubName = "dttelemetry")] SignalRConnectionInfo connectionInfo)
- {
- return connectionInfo;
- }
-
- [FunctionName("broadcast")]
- public static Task SendMessage(
- [EventGridTrigger] EventGridEvent eventGridEvent,
- [SignalR(HubName = "dttelemetry")] IAsyncCollector<SignalRMessage> signalRMessages,
- ILogger log)
- {
- JObject eventGridData = (JObject)JsonConvert.DeserializeObject(eventGridEvent.Data.ToString());
- log.LogInformation($"Event grid message: {eventGridData}");
-
- var patch = (JObject)eventGridData["data"]["patch"][0];
- if (patch["path"].ToString().Contains("/Temperature"))
- {
- temperature = Math.Round(patch["value"].ToObject<double>(), 2);
- }
-
- var message = new Dictionary<object, object>
- {
- { "temperatureInFahrenheit", temperature},
- };
-
- return signalRMessages.AddAsync(
- new SignalRMessage
- {
- Target = "newMessage",
- Arguments = new[] { message }
- });
- }
- }
- }
- ```
+ :::code language="csharp" source="~/digital-twins-docs-samples/sdks/csharp/signalRFunction.cs":::
1. In Visual Studio's *Package Manager Console* window, or any command window on your machine in the *Azure_Digital_Twins_end_to_end_samples\AdtSampleApp\SampleFunctionsApp* folder, run the following command to install the `SignalRService` NuGet package to the project: ```cmd
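:: Hypothetical sketch; the exact command is truncated here, but the
:: SignalRService extension package is typically added like this:
dotnet add package Microsoft.Azure.WebJobs.Extensions.SignalRService
```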
digital-twins https://docs.microsoft.com/en-us/azure/digital-twins/how-to-integrate-maps https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/digital-twins/how-to-integrate-maps.md
@@ -78,60 +78,7 @@ See the following document for reference info: [*Azure Event Grid trigger for Az
Replace the function code with the following code. It will filter out only updates to space twins, read the updated temperature, and send that information to Azure Maps.
-```C#
-using Microsoft.Azure.EventGrid.Models;
-using Microsoft.Azure.WebJobs;
-using Microsoft.Azure.WebJobs.Extensions.EventGrid;
-using Microsoft.Extensions.Logging;
-using Newtonsoft.Json;
-using Newtonsoft.Json.Linq;
-using System;
-using System.Threading.Tasks;
-using System.Net.Http;
-
-namespace SampleFunctionsApp
-{
- public static class ProcessDTUpdatetoMaps
- { //Read maps credentials from application settings on function startup
- private static string statesetID = Environment.GetEnvironmentVariable("statesetID");
- private static string subscriptionKey = Environment.GetEnvironmentVariable("subscription-key");
- private static HttpClient httpClient = new HttpClient();
-
- [FunctionName("ProcessDTUpdatetoMaps")]
- public static async Task Run([EventGridTrigger]EventGridEvent eventGridEvent, ILogger log)
- {
- JObject message = (JObject)JsonConvert.DeserializeObject(eventGridEvent.Data.ToString());
- log.LogInformation("Reading event from twinID:" + eventGridEvent.Subject.ToString() + ": " +
- eventGridEvent.EventType.ToString() + ": " + message["data"]);
-
- //Parse updates to "space" twins
- if (message["data"]["modelId"].ToString() == "dtmi:contosocom:DigitalTwins:Space;1")
- { //Set the ID of the room to be updated in your map.
- //Replace this line with your logic for retrieving featureID.
- string featureID = "UNIT103";
-
- //Iterate through the properties that have changed
- foreach (var operation in message["data"]["patch"])
- {
- if (operation["op"].ToString() == "replace" && operation["path"].ToString() == "/Temperature")
- { //Update the maps feature stateset
- var postcontent = new JObject(new JProperty("States", new JArray(
- new JObject(new JProperty("keyName", "temperature"),
- new JProperty("value", operation["value"].ToString()),
- new JProperty("eventTimestamp", DateTime.Now.ToString("s"))))));
-
- var response = await httpClient.PostAsync(
- $"https://atlas.microsoft.com/featureState/state?api-version=1.0&statesetID={statesetID}&featureID={featureID}&subscription-key={subscriptionKey}",
- new StringContent(postcontent.ToString()));
-
- log.LogInformation(await response.Content.ReadAsStringAsync());
- }
- }
- }
- }
- }
-}
-```
+:::code language="csharp" source="~/digital-twins-docs-samples/sdks/csharp/updateMaps.cs":::
You'll need to set two environment variables in your function app. One is your [Azure Maps primary subscription key](../azure-maps/quick-demo-map-app.md#get-the-primary-key-for-your-account), and one is your [Azure Maps stateset ID](../azure-maps/tutorial-creator-indoor-maps.md#create-a-feature-stateset).
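A sketch of setting those two values from the CLI, using the setting names read by the function code above and placeholder values:

```azurecli-interactive
az functionapp config appsettings set -g <your-resource-group> -n <your-function-app-name> --settings "subscription-key=<your-Azure-Maps-primary-key>" "statesetID=<your-stateset-ID>"
```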
digital-twins https://docs.microsoft.com/en-us/azure/digital-twins/how-to-integrate-models https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/digital-twins/how-to-integrate-models.md
@@ -57,54 +57,7 @@ The following table is an example of how RDFS and OWL constructs can be mapped t
The following C# code snippet shows how an RDF model file is loaded into a graph and converted to DTDL, using the [**dotNetRDF**](https://www.dotnetrdf.org/) library.
-```csharp
-using VDS.RDF.Ontology;
-using VDS.RDF.Parsing;
-using Microsoft.Azure.DigitalTwins.Parser;
-
-//...
-
-Console.WriteLine("Reading file...");
-
-FileLoader.Load(_ontologyGraph, rdfFile.FullName);
-
-// Start looping through for each owl:Class
-foreach (OntologyClass owlClass in _ontologyGraph.OwlClasses)
-{
-
- // Generate a DTMI for the owl:Class
- string Id = GenerateDTMI(owlClass);
-
- if (!String.IsNullOrEmpty(Id))
- {
-
- Console.WriteLine($"{owlClass.ToString()} -> {Id}");
-
- // Create Interface
- DtdlInterface dtdlInterface = new DtdlInterface
- {
- Id = Id,
- Type = "Interface",
- DisplayName = GetInterfaceDisplayName(owlClass),
- Comment = GetInterfaceComment(owlClass),
- Contents = new List<DtdlContents>()
- };
-
- // Use DTDL 'extends' for super classes
- IEnumerable<OntologyClass> foundSuperClasses = owlClass.DirectSuperClasses;
-
- //...
- }
-
- // Add interface to the list of interfaces
- _interfaceList.Add(dtdlInterface);
-}
-
-// Serialize to JSON
-var json = JsonConvert.SerializeObject(_interfaceList);
-
-//...
-```
+:::code language="csharp" source="~/digital-twins-docs-samples/sdks/csharp/convertRDF.cs":::
### Sample converter application
digital-twins https://docs.microsoft.com/en-us/azure/digital-twins/how-to-integrate-time-series-insights https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/digital-twins/how-to-integrate-time-series-insights.md
@@ -95,51 +95,7 @@ For more information about using Event Hubs with Azure functions, see [*Azure Ev
Inside your published function app, replace the function code with the following code.
-```C#
-using Microsoft.Azure.EventHubs;
-using Microsoft.Azure.WebJobs;
-using Microsoft.Extensions.Logging;
-using Newtonsoft.Json;
-using Newtonsoft.Json.Linq;
-using System.Threading.Tasks;
-using System.Text;
-using System.Collections.Generic;
-
-namespace SampleFunctionsApp
-{
- public static class ProcessDTUpdatetoTSI
- {
- [FunctionName("ProcessDTUpdatetoTSI")]
- public static async Task Run(
- [EventHubTrigger("twins-event-hub", Connection = "EventHubAppSetting-Twins")]EventData myEventHubMessage,
- [EventHub("tsi-event-hub", Connection = "EventHubAppSetting-TSI")]IAsyncCollector<string> outputEvents,
- ILogger log)
- {
- JObject message = (JObject)JsonConvert.DeserializeObject(Encoding.UTF8.GetString(myEventHubMessage.Body));
- log.LogInformation("Reading event:" + message.ToString());
-
- // Read values that are replaced or added
- Dictionary<string, object> tsiUpdate = new Dictionary<string, object>();
- foreach (var operation in message["patch"]) {
- if (operation["op"].ToString() == "replace" || operation["op"].ToString() == "add")
- {
- //Convert from JSON patch path to a flattened property for TSI
- //Example input: /Front/Temperature
- // output: Front.Temperature
- string path = operation["path"].ToString().Substring(1);
- path = path.Replace("/", ".");
- tsiUpdate.Add(path, operation["value"]);
- }
- }
- //Send an update if updates exist
- if (tsiUpdate.Count>0){
- tsiUpdate.Add("$dtId", myEventHubMessage.Properties["cloudEvents:subject"]);
- await outputEvents.AddAsync(JsonConvert.SerializeObject(tsiUpdate));
- }
- }
- }
-}
-```
+:::code language="csharp" source="~/digital-twins-docs-samples/sdks/csharp/updateTSI.cs":::
From here, the function will then send the JSON objects it creates to a second event hub, which you will connect to Time Series Insights.
@@ -203,14 +159,14 @@ Next, you'll need to set environment variables in your function app from earlier
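A sketch of adding those settings from the CLI, using the connection setting names from the function's trigger attributes and placeholder values:

```azurecli-interactive
az functionapp config appsettings set -g <your-resource-group> -n <your-function-app-name> --settings "EventHubAppSetting-Twins=<twins-event-hub-connection-string>" "EventHubAppSetting-TSI=<tsi-event-hub-connection-string>"
```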
Next, you will set up a Time Series Insights instance to receive the data from your second event hub. Follow the steps below, and for more details about this process, see [*Tutorial: Set up an Azure Time Series Insights Gen2 PAYG environment*](../time-series-insights/tutorials-set-up-tsi-environment.md). 1. In the Azure portal, begin creating a Time Series Insights resource.
- 1. Select the **PAYG(Preview)** pricing tier.
+ 1. Select the **Gen2(L1)** pricing tier.
 2. You will need to choose a **time series ID** for this environment. Your time series ID can consist of up to three values that you will use to search for your data in Time Series Insights. For this tutorial, you can use **$dtId**. Read more about selecting an ID value in [*Best practices for choosing a Time Series ID*](../time-series-insights/how-to-select-tsid.md).
- :::image type="content" source="media/how-to-integrate-time-series-insights/create-twin-id.png" alt-text="The creation portal UX for a Time Series Insights environment. The PAYG(Preview) pricing tier is selected and the time series ID property name is $dtId":::
+ :::image type="content" source="media/how-to-integrate-time-series-insights/create-twin-id.png" alt-text="The creation portal UX for a Time Series Insights environment. The Gen2(L1) pricing tier is selected and the time series ID property name is $dtId" lightbox="media/how-to-integrate-time-series-insights/create-twin-id.png":::
2. Select **Next: Event Source** and select your Event Hubs information from above. You will also need to create a new Event Hubs consumer group.
- :::image type="content" source="media/how-to-integrate-time-series-insights/event-source-twins.png" alt-text="The creation portal UX for a Time Series Insights environment event source. You are creating an event source with the event hub information from above. You are also creating a new consumer group.":::
+ :::image type="content" source="media/how-to-integrate-time-series-insights/event-source-twins.png" alt-text="The creation portal UX for a Time Series Insights environment event source. You are creating an event source with the event hub information from above. You are also creating a new consumer group." lightbox="media/how-to-integrate-time-series-insights/event-source-twins.png":::
## Begin sending IoT data to Azure Digital Twins
digital-twins https://docs.microsoft.com/en-us/azure/digital-twins/how-to-interpret-event-data https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/digital-twins/how-to-interpret-event-data.md
@@ -261,22 +261,9 @@ Here are the fields in the body of a digital twin change notification.
The body for the `Twin.Update` notification is a JSON Patch document containing the update to the digital twin.
-For example, say that a digital twin was updated using the following Patch.
+For example, say that a digital twin was updated using the following patch.
-```json
-[
- {
- "op": "replace",
- "value": 40,
- "path": "/Temperature"
- },
- {
- "op": "add",
- "value": 30,
- "path": "/comp1/prop1"
- }
-]
-```
+:::code language="json" source="~/digital-twins-docs-samples/models/patch-component-2.json":::
The corresponding notification (if synchronously executed by the service, such as Azure Digital Twins updating a digital twin) would have a body like:
digital-twins https://docs.microsoft.com/en-us/azure/digital-twins/how-to-manage-graph https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/digital-twins/how-to-manage-graph.md
@@ -52,33 +52,12 @@ For example, for the twin *foo*, each specific relationship ID must be unique. H
The following code sample illustrates how to create a relationship in your Azure Digital Twins instance.
-```csharp
-public async static Task CreateRelationship(DigitalTwinsClient client, string srcId, string targetId, string relName)
- {
- var relationship = new BasicRelationship
- {
- TargetId = targetId,
- Name = relName
- };
-
- try
- {
- string relId = $"{srcId}-{relName}->{targetId}";
- await client.CreateOrReplaceRelationshipAsync<BasicRelationship>(srcId, relId, relationship);
- Console.WriteLine($"Created {relName} relationship successfully");
- }
- catch (RequestFailedException rex)
- {
- Console.WriteLine($"Create relationship error: {rex.Status}:{rex.Message}");
- }
-
- }
-```
+:::code language="csharp" source="~/digital-twins-docs-samples/sdks/csharp/graph_operations_sample.cs" id="CreateRelationshipMethod":::
+ In your main method, you can now call the `CreateRelationship()` function to create a _contains_ relationship like this:
-```csharp
-await CreateRelationship(client, srcId, targetId, "contains");
-```
+:::code language="csharp" source="~/digital-twins-docs-samples/sdks/csharp/graph_operations_sample.cs" id="UseCreateRelationship":::
+ If you wish to create multiple relationships, you can repeat calls to the same method, passing different relationship types into the argument. For more information on the helper class `BasicRelationship`, see [*How-to: Use the Azure Digital Twins APIs and SDKs*](how-to-use-apis-sdks.md#serialization-helpers).
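For instance, with the `CreateRelationship()` method above and a hypothetical second relationship type:

```csharp
// Sketch: repeat the call with a different relationship name each time
await CreateRelationship(client, srcId, targetId, "contains");
await CreateRelationship(client, srcId, targetId, "isEquippedWith");
```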
@@ -100,44 +79,18 @@ You can even create multiple instances of the same type of relationship between
To access the list of **outgoing** relationships for a given twin in the graph, you can use the `GetRelationships()` method like this:
-```csharp
-await client.GetRelationships()
-```
+:::code language="csharp" source="~/digital-twins-docs-samples/sdks/csharp/graph_operations_sample.cs" id="GetRelationshipsCall":::
This returns an `Azure.Pageable<T>` or `Azure.AsyncPageable<T>`, depending on whether you use the synchronous or asynchronous version of the call. Here is an example that retrieves a list of relationships:
-```csharp
-public static async Task<List<BasicRelationship>> FindOutgoingRelationshipsAsync(DigitalTwinsClient client, string dtId)
- {
- // Find the relationships for the twin
- try
- {
- // GetRelationshipsAsync will throw if an error occurs
- AsyncPageable<BasicRelationship> rels = client.GetRelationshipsAsync<BasicRelationship>(dtId);
- List<BasicRelationship> results = new List<BasicRelationship>();
- await foreach (BasicRelationship rel in rels)
- {
- results.Add(rel);
- Console.WriteLine($"Found relationship-{rel.Name}->{rel.TargetId}");
- }
-
- return results;
- }
- catch (RequestFailedException ex)
- {
- Console.WriteLine($"*** Error {ex.Status}/{ex.ErrorCode} retrieving relationships for {dtId} due to {ex.Message}");
- return null;
- }
- }
-
-```
+:::code language="csharp" source="~/digital-twins-docs-samples/sdks/csharp/graph_operations_sample.cs" id="FindOutgoingRelationshipsMethod":::
+ You can now call this method to see the outgoing relationships of the twins like this:
-```csharp
-await FindOutgoingRelationshipsAsync(client, twin_Id);
-```
+:::code language="csharp" source="~/digital-twins-docs-samples/sdks/csharp/graph_operations_sample.cs" id="UseFindOutgoingRelationships":::
+ You can use the retrieved relationships to navigate to other twins in your graph. To do this, read the `target` field from the relationship that is returned, and use it as the ID for your next call to `GetDigitalTwin()`. ### Find incoming relationships to a digital twin
@@ -148,84 +101,31 @@ The previous code sample was focused on finding outgoing relationships from a tw
Note that the `IncomingRelationship` calls don't return the full body of the relationship.
-```csharp
-public static async Task<List<IncomingRelationship>> FindIncomingRelationshipsAsync(DigitalTwinsClient client, string dtId)
- {
- // Find the relationships for the twin
- try
- {
- // GetRelationshipsAsync will throw an error if a problem occurs
- AsyncPageable<IncomingRelationship> incomingRels = client.GetIncomingRelationshipsAsync(dtId);
-
- List<IncomingRelationship> results = new List<IncomingRelationship>();
- await foreach (IncomingRelationship incomingRel in incomingRels)
- {
- results.Add(incomingRel);
- Console.WriteLine($"Found incoming relationship-{incomingRel.RelationshipId}");
-
- }
- return results;
- }
- catch (RequestFailedException ex)
- {
- Console.WriteLine($"*** Error {ex.Status}/{ex.ErrorCode} retrieving incoming relationships for {dtId} due to {ex.Message}");
- return null;
- }
- }
-```
+:::code language="csharp" source="~/digital-twins-docs-samples/sdks/csharp/graph_operations_sample.cs" id="FindIncomingRelationshipsMethod":::
You can now call this method to see the incoming relationships of the twins like this:
-```csharp
-await FindIncomingRelationshipsAsync(client, twin_Id);
-```
+:::code language="csharp" source="~/digital-twins-docs-samples/sdks/csharp/graph_operations_sample.cs" id="UseFindIncomingRelationships":::
+ ### List all twin properties and relationships Using the above methods for listing outgoing and incoming relationships to a twin, you can create a method that prints full twin information, including the twin's properties and both types of its relationships. Here is an example method, called `FetchAndPrintTwinAsync()`, showing how to do this.
-```csharp
-private static async Task FetchAndPrintTwinAsync(DigitalTwinsClient client, string twin_Id)
- {
- BasicDigitalTwin twin;
- Response<BasicDigitalTwin> res = client.GetDigitalTwin(twin_Id);
-
- await FindOutgoingRelationshipsAsync(client, twin_Id);
- await FindIncomingRelationshipsAsync(client, twin_Id);
-
- return;
- }
-```
+:::code language="csharp" source="~/digital-twins-docs-samples/sdks/csharp/graph_operations_sample.cs" id="FetchAndPrintMethod":::
You can now call this function in your main method like this:
-```csharp
-await FetchAndPrintTwinAsync(client, targetId);
-```
+:::code language="csharp" source="~/digital-twins-docs-samples/sdks/csharp/graph_operations_sample.cs" id="UseFetchAndPrint":::
+ ## Delete relationships The first parameter specifies the source twin (the twin where the relationship originates). The other parameter is the relationship ID. You need both the twin ID and the relationship ID, because relationship IDs are only unique within the scope of a twin.
-```csharp
-private static async Task DeleteRelationship(DigitalTwinsClient client, string srcId, string relId)
- {
- try
- {
- Response response = await client.DeleteRelationshipAsync(srcId, relId);
- await FetchAndPrintTwinAsync(srcId, client);
- Console.WriteLine("Deleted relationship successfully");
- }
- catch (RequestFailedException Ex)
- {
- Console.WriteLine($"Error {Ex.ErrorCode}");
- }
- }
-```
+:::code language="csharp" source="~/digital-twins-docs-samples/sdks/csharp/graph_operations_sample.cs" id="DeleteRelationshipMethod":::
You can now call this method to delete a relationship like this:
-```csharp
-await DeleteRelationship(client, srcId, relId);
-```
+:::code language="csharp" source="~/digital-twins-docs-samples/sdks/csharp/graph_operations_sample.cs" id="UseDeleteRelationship":::
## Runnable twin graph sample
@@ -238,11 +138,9 @@ The snippet uses the [*Room.json*](https://github.com/Azure-Samples/digital-twin
Before you run the sample, do the following: 1. Download the model files, place them in your project, and replace the `<path-to>` placeholders in the code below to tell your program where to find them. 2. Replace the placeholder `<your-instance-hostname>` with your Azure Digital Twins instance's hostname.
-3. Add these packages to your project:
- ```cmd/sh
- dotnet add package Azure.DigitalTwins.Core --version 1.0.0-preview.3
- dotnet add package Azure.identity
- ```
+3. Add two dependencies to your project that will be needed to work with Azure Digital Twins. You can use the links below to navigate to the packages on NuGet, where you can find the console commands (including for .NET CLI) to add the latest version of each to your project. (For reference, the unpinned .NET CLI commands are shown after this list.)
+ * [**Azure.DigitalTwins.Core**](https://www.nuget.org/packages/Azure.DigitalTwins.Core). This is the package for the [Azure Digital Twins SDK for .NET](/dotnet/api/overview/azure/digitaltwins/client?view=azure-dotnet&preserve-view=true).
+ * [**Azure.Identity**](https://www.nuget.org/packages/Azure.Identity). This library provides tools to help with authentication against Azure.
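For reference, a sketch of the unpinned .NET CLI commands for these two packages (check NuGet for the current versions):

```cmd/sh
dotnet add package Azure.DigitalTwins.Core
dotnet add package Azure.Identity
```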
You'll also need to set up local credentials if you want to run the sample directly. The next section walks through this. [!INCLUDE [Azure Digital Twins: local credentials prereq (outer)](../../includes/digital-twins-local-credentials-outer.md)]
@@ -251,198 +149,7 @@ You'll also need to set up local credentials if you want to run the sample direc
After completing the above steps, you can directly run the following sample code.
-```csharp
-using System;
-using Azure.DigitalTwins.Core;
-using Azure.Identity;
-using System.Threading.Tasks;
-using System.IO;
-using System.Collections.Generic;
-using Azure;
-using Azure.DigitalTwins.Core.Serialization;
-using System.Text.Json;
-
-namespace minimal
-{
- class Program
- {
-
- public static async Task Main(string[] args)
- {
- Console.WriteLine("Hello World!");
-
- //Create the Azure Digital Twins client for API calls
- DigitalTwinsClient client = createDTClient();
- Console.WriteLine($"Service client created ΓÇô ready to go");
- Console.WriteLine();
-
- //Upload models
- Console.WriteLine($"Upload models");
- Console.WriteLine();
- string dtdl = File.ReadAllText("<path-to>/Room.json");
- string dtdl1 = File.ReadAllText("<path-to>/Floor.json");
- var typeList = new List<string>();
- typeList.Add(dtdl);
- typeList.Add(dtdl1);
- // Upload the models to the service
- await client.CreateModelsAsync(typeList);
-
- //Create new (Floor) digital twin
- BasicDigitalTwin floorTwin = new BasicDigitalTwin();
- string srcId = "myFloorID";
- floorTwin.Metadata = new DigitalTwinMetadata();
- floorTwin.Metadata.ModelId = "dtmi:example:Floor;1";
- //Floor twins have no properties, so nothing to initialize
- //Create the twin
- await client.CreateOrReplaceDigitalTwinAsync<BasicDigitalTwin>(srcId, floorTwin);
- Console.WriteLine("Twin created successfully");
-
- //Create second (Room) digital twin
- BasicDigitalTwin roomTwin = new BasicDigitalTwin();
- string targetId = "myRoomID";
- roomTwin.Metadata = new DigitalTwinMetadata();
- roomTwin.Metadata.ModelId = "dtmi:example:Room;1";
- // Initialize properties
- Dictionary<string, object> props = new Dictionary<string, object>();
- props.Add("Temperature", 35.0);
- props.Add("Humidity", 55.0);
- roomTwin.Contents = props;
- //Create the twin
- await client.CreateOrReplaceDigitalTwinAsync<BasicDigitalTwin>(targetId, roomTwin);
-
- //Create relationship between them
- await CreateRelationship(client, srcId, targetId, "contains");
- Console.WriteLine();
-
- //Print twins and their relationships
- Console.WriteLine("--- Printing details:");
- Console.WriteLine("Outgoing relationships from source twin:");
- await FetchAndPrintTwinAsync(srcId, client);
- Console.WriteLine();
- Console.WriteLine("Incoming relationships to target twin:");
- await FetchAndPrintTwinAsync(targetId, client);
- Console.WriteLine("--------");
- Console.WriteLine();
-
- //Delete the relationship
- Console.WriteLine("Deleting the relationship");
- await DeleteRelationship(client, srcId, $"{srcId}-contains->{targetId}");
- Console.WriteLine();
-
- //Print twins and their relationships again
- Console.WriteLine("--- Printing details:");
- Console.WriteLine("Outgoing relationships from source twin:");
- await FetchAndPrintTwinAsync(srcId, client);
- Console.WriteLine();
- Console.WriteLine("Incoming relationships to target twin:");
- await FetchAndPrintTwinAsync(targetId, client);
- Console.WriteLine("--------");
- Console.WriteLine();
- }
-
- private static DigitalTwinsClient createDTClient()
- {
- string adtInstanceUrl = "https://<your-instance-hostname>";
- var credentials = new DefaultAzureCredential();
- DigitalTwinsClient client = new DigitalTwinsClient(new Uri(adtInstanceUrl), credentials);
- return client;
- }
- private async static Task CreateRelationship(DigitalTwinsClient client, string srcId, string targetId, string relName)
- {
- // Create relationship between twins
- var relationship = new BasicRelationship
- {
- TargetId = targetId,
- Name = relName
- };
-
- try
- {
- string relId = $"{srcId}-{relName}->{targetId}";
- await client.CreateOrReplaceRelationshipAsync<BasicRelationship>(srcId, relId, relationship);
- Console.WriteLine($"Created {relName} relationship successfully");
- }
- catch (RequestFailedException rex)
- {
- Console.WriteLine($"Create relationship error: {rex.Status}:{rex.Message}");
- }
-
- }
-
- private static async Task FetchAndPrintTwinAsync(string twin_Id, DigitalTwinsClient client)
- {
- // Fetch the twin
- Response<BasicDigitalTwin> res = client.GetDigitalTwin<BasicDigitalTwin>(twin_Id);
- await FindOutgoingRelationshipsAsync(client, twin_Id);
- await FindIncomingRelationshipsAsync(client, twin_Id);
-
- return;
- }
-
- private static async Task<List<BasicRelationship>> FindOutgoingRelationshipsAsync(DigitalTwinsClient client, string dtId)
- {
- // Find the relationships for the twin
-
- try
- {
- // GetRelationshipsAsync will throw if an error occurs
- AsyncPageable<BasicRelationship> rels = client.GetRelationshipsAsync<BasicRelationship>(dtId);
- List<BasicRelationship> results = new List<BasicRelationship>();
- await foreach (BasicRelationship rel in rels)
- {
- results.Add(rel);
- Console.WriteLine($"Found relationship-{rel.Name}->{rel.TargetId}");
- }
-
- return results;
- }
- catch (RequestFailedException ex)
- {
- Console.WriteLine($"*** Error {ex.Status}/{ex.ErrorCode} retrieving relationships for {dtId} due to {ex.Message}");
- return null;
- }
- }
-
- private static async Task<List<IncomingRelationship>> FindIncomingRelationshipsAsync(DigitalTwinsClient client, string dtId)
- {
- // Find the relationships for the twin
-
- try
- {
- // GetIncomingRelationshipsAsync will throw an error if a problem occurs
- AsyncPageable<IncomingRelationship> incomingRels = client.GetIncomingRelationshipsAsync(dtId);
-
- List<IncomingRelationship> results = new List<IncomingRelationship>();
- await foreach (IncomingRelationship incomingRel in incomingRels)
- {
- results.Add(incomingRel);
- Console.WriteLine($"Found incoming relationship-{incomingRel.RelationshipId}");
- }
- return results;
- }
- catch (RequestFailedException ex)
- {
- Console.WriteLine($"*** Error {ex.Status}/{ex.ErrorCode} retrieving incoming relationships for {dtId} due to {ex.Message}");
- return null;
- }
- }
-
- private static async Task DeleteRelationship(DigitalTwinsClient client, string srcId, string relId)
- {
- try
- {
- Response response = await client.DeleteRelationshipAsync(srcId, relId);
- await FetchAndPrintTwinAsync(srcId, client);
- Console.WriteLine("Deleted relationship successfully");
- }
- catch (RequestFailedException Ex)
- {
- Console.WriteLine($"Error {Ex.ErrorCode}");
- }
- }
- }
-}
-```
+:::code language="csharp" source="~/digital-twins-docs-samples/sdks/csharp/graph_operations_sample.cs":::
Here is the console output of the above program:
@@ -468,121 +175,7 @@ One way to get this data into Azure Digital Twins is to convert the table to a C
In the code below, the CSV file is called *data.csv*, and there is a placeholder representing the **hostname** of your Azure Digital Twins instance. The sample also makes use of several packages that you can add to your project to help with this process.
-```csharp
-using System;
-using System.Collections.Generic;
-using System.Text.Json;
-using System.Threading.Tasks;
-using Azure;
-using Azure.DigitalTwins.Core;
-using Azure.Identity;
-
-namespace creating_twin_graph_from_csv
-{
- class Program
- {
- static async Task Main(string[] args)
- {
- List<BasicRelationship> RelationshipRecordList = new List<BasicRelationship>();
- List<BasicDigitalTwin> TwinList = new List<BasicDigitalTwin>();
- List<List<string>> data = ReadData();
- DigitalTwinsClient client = createDTClient();
-
- // Interpret the CSV file data, by each row
- foreach (List<string> row in data)
- {
- string modelID = row.Count > 0 ? row[0].Trim() : null;
- string srcID = row.Count > 1 ? row[1].Trim() : null;
- string relName = row.Count > 2 ? row[2].Trim() : null;
- string targetID = row.Count > 3 ? row[3].Trim() : null;
- string initProperties = row.Count > 4 ? row[4].Trim() : null;
- Console.WriteLine($"ModelID: {modelID}, TwinID: {srcID}, RelName: {relName}, TargetID: {targetID}, InitData: {initProperties}");
- Dictionary<string, object> props = new Dictionary<string, object>();
- // Parse properties into dictionary (left out for compactness)
- // ...
-
- // Null check for source and target IDs
- if (srcID != null && srcID.Length > 0 && targetID != null && targetID.Length > 0)
- {
- BasicRelationship br = new BasicRelationship()
- {
- SourceId = srcID,
- TargetId = targetID,
- Name = relName
- };
- RelationshipRecordList.Add(br);
- }
- BasicDigitalTwin srcTwin = new BasicDigitalTwin();
- srcTwin.Id = srcID;
- srcTwin.Metadata = new DigitalTwinMetadata();
- srcTwin.Metadata.ModelId = modelID;
- srcTwin.Contents = props;
- TwinList.Add(srcTwin);
- }
-
- // Create digital twins
- foreach (BasicDigitalTwin twin in TwinList)
- {
- try
- {
- await client.CreateOrReplaceDigitalTwinAsync<BasicDigitalTwin>(twin.Id, twin);
- Console.WriteLine("Twin is created");
- }
- catch (RequestFailedException e)
- {
- Console.WriteLine($"Error {e.Status}: {e.Message}");
- }
- }
- // Create relationships between the twins
- foreach (BasicRelationship rec in RelationshipRecordList)
- {
- try
- {
- string relId = $"{rec.SourceId}-{rec.Name}->{rec.TargetId}";
- await client.CreateOrReplaceRelationshipAsync<BasicRelationship>(rec.SourceId, relId, rec);
- Console.WriteLine("Relationship is created");
- }
- catch (RequestFailedException e)
- {
- Console.WriteLine($"Error {e.Status}: {e.Message}");
- }
- }
- }
-
- // Method to ingest data from the CSV file
- public static List<List<string>> ReadData()
- {
- string path = "<path-to>/data.csv";
- string[] lines = System.IO.File.ReadAllLines(path);
- List<List<string>> data = new List<List<string>>();
- int count = 0;
- foreach (string line in lines)
- {
- if (count++ == 0)
- continue;
- List<string> cols = new List<string>();
- data.Add(cols);
- string[] columns = line.Split(',');
- foreach (string column in columns)
- {
- cols.Add(column);
- }
- }
- return data;
- }
- // Method to create the digital twins client
- private static DigitalTwinsClient createDTClient()
- {
-
- string adtInstanceUrl = "https://<your-instance-hostname>";
- var credentials = new DefaultAzureCredential();
- DigitalTwinsClient client = new DigitalTwinsClient(new Uri(adtInstanceUrl), credentials);
- return client;
- }
- }
-}
-
-```
+:::code language="csharp" source="~/digital-twins-docs-samples/sdks/csharp/graphFromCSV.cs":::
## Next steps
digital-twins https://docs.microsoft.com/en-us/azure/digital-twins/how-to-manage-model https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/digital-twins/how-to-manage-model.md
@@ -37,35 +37,7 @@ Consider an example in which a hospital wants to digitally represent their rooms
The first step towards the solution is to create models to represent aspects of the hospital. A patient room in this scenario might be described like this:
-```json
-{
- "@id": "dtmi:com:contoso:PatientRoom;1",
- "@type": "Interface",
- "@context": "dtmi:dtdl:context;2",
- "displayName": "Patient Room",
- "contents": [
- {
- "@type": "Property",
- "name": "visitorCount",
- "schema": "double"
- },
- {
- "@type": "Property",
- "name": "handWashCount",
- "schema": "double"
- },
- {
- "@type": "Property",
- "name": "handWashPercentage",
- "schema": "double"
- },
- {
- "@type": "Relationship",
- "name": "hasDevices"
- }
- ]
-}
-```
+:::code language="json" source="~/digital-twins-docs-samples/models/PatientRoom.json":::
> [!NOTE] > This is a sample body for a .json file in which a model is defined and saved, to be uploaded as part of a client project. The REST API call, on the other hand, takes an array of model definitions like the one above (which is mapped to a `IEnumerable<string>` in the .NET SDK). So to use this model in the REST API directly, surround it with brackets.
@@ -87,48 +59,16 @@ Once models are created, you can upload them to the Azure Digital Twins instance
When you're ready to upload a model, you can use the following code snippet:
-```csharp
-// 'client' is an instance of DigitalTwinsClient
-// Read model file into string (not part of SDK)
-StreamReader r = new StreamReader("MyModelFile.json");
-string dtdl = r.ReadToEnd(); r.Close();
-string[] dtdls = new string[] { dtdl };
-client.CreateModels(dtdls);
-```
+:::code language="csharp" source="~/digital-twins-docs-samples/sdks/csharp/model_operations.cs" id="CreateModel":::
Observe that the `CreateModels` method accepts multiple files in a single transaction. Here's a sample to illustrate:
-```csharp
-var dtdlFiles = Directory.EnumerateFiles(sourceDirectory, "*.json");
-
-List<string> dtdlStrings = new List<string>();
-foreach (string fileName in dtdlFiles)
-{
- // Read model file into string (not part of SDK)
- StreamReader r = new StreamReader(fileName);
- string dtdl = r.ReadToEnd(); r.Close();
- dtdlStrings.Add(dtdl);
-}
-client.CreateModels(dtdlStrings);
-```
+:::code language="csharp" source="~/digital-twins-docs-samples/sdks/csharp/model_operations.cs" id="CreateModels_multi":::
Model files can contain more than a single model. In this case, the models need to be placed in a JSON array. For example:
-```json
-[
- {
- "@id": "dtmi:com:contoso:Planet",
- "@type": "Interface",
- //...
- },
- {
- "@id": "dtmi:com:contoso:Moon",
- "@type": "Interface",
- //...
- }
-]
-```
-
+:::code language="json" source="~/digital-twins-docs-samples/models/Planet-Moon.json":::
+ On upload, model files are validated by the service. ## Retrieve models
@@ -142,18 +82,7 @@ Here are your options for this:
Here are some example calls:
-```csharp
-// 'client' is a valid DigitalTwinsClient object
-
-// Get a single model, metadata and data
-DigitalTwinsModelData md1 = client.GetModel(id);
-
-// Get a list of the metadata of all available models
-Pageable<DigitalTwinsModelData> pmd2 = client.GetModels();
-
-// Get models and metadata for a model ID, including all dependencies (models that it inherits from, components it references)
-Pageable<DigitalTwinsModelData> pmd3 = client.GetModels(new GetModelsOptions { IncludeModelDefinition = true });
-```
+:::code language="csharp" source="~/digital-twins-docs-samples/sdks/csharp/model_operations.cs" id="GetModels":::
The API calls to retrieve models all return `DigitalTwinsModelData` objects. `DigitalTwinsModelData` contains metadata about the model stored in the Azure Digital Twins instance, such as name, DTMI, and creation date of the model. The `DigitalTwinsModelData` object also optionally includes the model itself. Depending on parameters, you can thus use the retrieve calls to either retrieve just metadata (which is useful in scenarios where you want to display a UI list of available tools, for example), or the entire model.
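For instance, here is a minimal sketch (assuming an authenticated `client`) that retrieves every model along with its definition, and prints both the DTMI and the DTDL text:

```csharp
// List all models, requesting the full model definitions along with the metadata
AsyncPageable<DigitalTwinsModelData> models =
    client.GetModelsAsync(new GetModelsOptions { IncludeModelDefinition = true });
await foreach (DigitalTwinsModelData md in models)
{
    Console.WriteLine($"Model: {md.Id}");  // the model's DTMI
    Console.WriteLine(md.DtdlModel);       // the full DTDL definition as JSON text
}
```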
@@ -209,12 +138,7 @@ These are separate features and they do not impact each other, although they may
Here is the code to decommission a model:
-```csharp
-// 'client' is a valid DigitalTwinsClient
-client.DecommissionModel(dtmiOfPlanetInterface);
-// Write some code that deletes or transitions digital twins
-//...
-```
+:::code language="csharp" source="~/digital-twins-docs-samples/sdks/csharp/model_operations.cs" id="DecommissionModel":::
A model's decommissioning status is included in the `DigitalTwinsModelData` records returned by the model retrieval APIs.
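For instance, a sketch (assuming an authenticated `client`, and the `Decommissioned` flag exposed on these records in current SDK versions) that reports the status of each model:

```csharp
// Check the decommissioning status of every model in the instance
Pageable<DigitalTwinsModelData> allModels = client.GetModels();
foreach (DigitalTwinsModelData md in allModels)
{
    Console.WriteLine($"Model {md.Id} decommissioned: {md.Decommissioned}");
}
```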
@@ -245,10 +169,8 @@ Even if a model meets the requirements to delete it immediately, you may want to
6. Delete the model To delete a model, use this call:
-```csharp
-// 'client' is a valid DigitalTwinsClient
-await client.DeleteModelAsync(IDToDelete);
-```
+
+:::code language="csharp" source="~/digital-twins-docs-samples/sdks/csharp/model_operations.cs" id="DeleteModel":::
#### After deletion: Twins without models
digital-twins https://docs.microsoft.com/en-us/azure/digital-twins/how-to-manage-routes-apis-cli https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/digital-twins/how-to-manage-routes-apis-cli.md
@@ -126,17 +126,8 @@ To create an endpoint that has dead-lettering enabled, you'll need to create the
1. Next, add a `deadLetterSecret` field to the properties object in the **body** of the request. Set this value according to the template below, which crafts a URL from the storage account name, container name, and SAS token value that you gathered in the [previous section](#set-up-storage-resources).
- ```json
- {
- "properties": {
- "endpointType": "EventGrid",
- "TopicEndpoint": "https://contosoGrid.westus2-1.eventgrid.azure.net/api/events",
- "accessKey1": "xxxxxxxxxxx",
- "accessKey2": "xxxxxxxxxxx",
- "deadLetterSecret":"https://<storageAccountname>.blob.core.windows.net/<containerName>?<SASToken>"
- }
- }
- ```
+ :::code language="json" source="~/digital-twins-docs-samples/api-requests/deadLetterEndpoint.json":::
+ 1. Send the request to create the endpoint. For more information on structuring this request, see the Azure Digital Twins REST API documentation: [Endpoints - DigitalTwinsEndpoint CreateOrUpdate](/rest/api/digital-twins/controlplane/endpoints/digitaltwinsendpoint_createorupdate).
@@ -205,11 +196,7 @@ One route should allow multiple notifications and event types to be selected.
`CreateOrReplaceEventRouteAsync` is the SDK call that is used to add an event route. Here is an example of its usage:
-```csharp
-string eventFilter = "$eventType = 'DigitalTwinTelemetryMessages' or $eventType = 'DigitalTwinLifecycleNotification'";
-var er = new DigitalTwinsEventRoute("<your-endpointName>", eventFilter);
-await client.CreateOrReplaceEventRouteAsync("routeName", er);
-```
+:::code language="csharp" source="~/digital-twins-docs-samples/sdks/csharp/eventRoute_operations.cs" id="CreateEventRoute":::
> [!TIP] > All SDK functions come in synchronous and asynchronous versions.
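For example, a sketch of the same route-creation call in both flavors, assuming the `er` route object from the snippet above:

```csharp
// Synchronous variant
client.CreateOrReplaceEventRoute("routeName", er);

// Asynchronous variant
await client.CreateOrReplaceEventRouteAsync("routeName", er);
```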
@@ -217,35 +204,8 @@ await client.CreateOrReplaceEventRouteAsync("routeName", er);
### Event route sample code The following sample method shows how to create, list, and delete an event route:
-```csharp
-private async static Task CreateEventRoute(DigitalTwinsClient client, String routeName, DigitalTwinsEventRoute er)
-{
- try
- {
- Console.WriteLine("Create a route: testRoute1");
-
- // Make a filter that passes everything
- er.Filter = "true";
- await client.CreateOrReplaceEventRouteAsync(routeName, er);
- Console.WriteLine("Create route succeeded. Now listing routes:");
- Pageable<DigitalTwinsEventRoute> result = client.GetEventRoutes();
- foreach (DigitalTwinsEventRoute r in result)
- {
- Console.WriteLine($"Route {r.Id} to endpoint {r.EndpointName} with filter {r.Filter} ");
- }
- Console.WriteLine("Deleting routes:");
- foreach (DigitalTwinsEventRoute r in result)
- {
- Console.WriteLine($"Deleting route {r.Id}:");
- client.DeleteEventRoute(r.Id);
- }
- }
- catch (RequestFailedException e)
- {
- Console.WriteLine($"*** Error in event route processing ({e.ErrorCode}):\n${e.Message}");
- }
- }
-```
+
+:::code language="csharp" source="~/digital-twins-docs-samples/sdks/csharp/eventRoute_operations.cs" id="FullEventRouteSample":::
## Filter events
@@ -258,12 +218,8 @@ You can restrict the events being sent by adding a **filter** for an endpoint to
To add a filter, you can use a PUT request to *https://{Your-azure-digital-twins-hostname}/eventRoutes/{event-route-name}?api-version=2020-10-31* with the following body:
-```json
-{
- "endpointName": "<endpoint-name>",
- "filter": "<filter-text>"
-}
-```
+:::code language="json" source="~/digital-twins-docs-samples/api-requests/filter.json":::
+ Here are the supported route filters. Use the detail in the *Filter text schema* column to replace the `<filter-text>` placeholder in the request body above. [!INCLUDE [digital-twins-route-filters](../../includes/digital-twins-route-filters.md)]
digital-twins https://docs.microsoft.com/en-us/azure/digital-twins/how-to-manage-twin https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/digital-twins/how-to-manage-twin.md
@@ -36,9 +36,7 @@ This article focuses on managing digital twins; to work with relationships and t
To create a twin, you use the `CreateOrReplaceDigitalTwinAsync()` method on the service client like this:
-```csharp
-await client.CreateOrReplaceDigitalTwinAsync<BasicDigitalTwin>("myTwinId", initData);
-```
+:::code language="csharp" source="~/digital-twins-docs-samples/sdks/csharp/twin_operations_sample.cs" id="CreateTwinCall":::
To create a digital twin, you need to provide: * The desired ID for the digital twin
@@ -66,25 +64,13 @@ First, you can create a data object to represent the twin and its property data.
Without the use of any custom helper classes, you can represent a twin's properties in a `Dictionary<string, object>`, where the `string` is the name of the property and the `object` is an object representing the property and its value.
-[!INCLUDE [Azure Digital Twins code: create twin](../../includes/digital-twins-code-create-twin.md)]
+:::code language="csharp" source="~/digital-twins-docs-samples/sdks/csharp/twin_operations_other.cs" id="CreateTwin_noHelper":::
#### Create twins with the helper class The helper class of `BasicDigitalTwin` allows you to store property fields in a "twin" object directly. You may still want to build the list of properties using a `Dictionary<string, object>`, which can then be set on the twin object as its `Contents`.
-```csharp
-BasicDigitalTwin twin = new BasicDigitalTwin();
-twin.Metadata = new DigitalTwinMetadata();
-twin.Metadata.ModelId = "dtmi:example:Room;1";
-// Initialize properties
-Dictionary<string, object> props = new Dictionary<string, object>();
-props.Add("Temperature", 25.0);
-props.Add("Humidity", 50.0);
-twin.Contents = props;
-
-await client.CreateOrReplaceDigitalTwinAsync<BasicDigitalTwin>("myRoomId", twin);
-Console.WriteLine("The twin is created successfully");
-```
+:::code language="csharp" source="~/digital-twins-docs-samples/sdks/csharp/twin_operations_sample.cs" id="CreateTwin_withHelper":::
>[!NOTE] > `BasicDigitalTwin` objects come with an `Id` field. You can leave this field empty, but if you do add an ID value, it needs to match the ID parameter passed to the `CreateOrReplaceDigitalTwinAsync()` call. For example:
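For example, a sketch (with a hypothetical twin ID) in which the twin object's `Id` and the ID parameter of the call agree:

```csharp
// If the Id field is set, it must match the ID parameter passed to the call
BasicDigitalTwin twin = new BasicDigitalTwin { Id = "myRoomId" };
twin.Metadata = new DigitalTwinMetadata { ModelId = "dtmi:example:Room;1" };
await client.CreateOrReplaceDigitalTwinAsync<BasicDigitalTwin>("myRoomId", twin);
```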
@@ -97,20 +83,12 @@ Console.WriteLine("The twin is created successfully");
You can access the details of any digital twin by calling the `GetDigitalTwin()` method like this:
-```csharp
-Response<BasicDigitalTwin> result = client.GetDigitalTwin<BasicDigitalTwin>(id);
-```
+:::code language="csharp" source="~/digital-twins-docs-samples/sdks/csharp/twin_operations_sample.cs" id="GetTwinCall":::
+ This call returns twin data as a strongly typed object, such as `BasicDigitalTwin`. `BasicDigitalTwin` is a serialization helper class included with the SDK, which returns the core twin metadata and properties in pre-parsed form. Here's an example of how to use this to view twin details:
-```csharp
-Response<BasicDigitalTwin> res = client.GetDigitalTwin<BasicDigitalTwin>("myRoomId");
-BasicDigitalTwin twin = res.Value;
-Console.WriteLine($"Model id: {twin.Metadata.ModelId}");
-foreach (string prop in twin.Contents.Keys)
-{
- if (twin.Contents.TryGetValue(prop, out object value))
- Console.WriteLine($"Property '{prop}': {value}");
-}
-```
+:::code language="csharp" source="~/digital-twins-docs-samples/sdks/csharp/twin_operations_sample.cs" id="GetTwin":::
+ Only properties that have been set at least once are returned when you retrieve a twin with the `GetDigitalTwin()` method. >[!TIP]
@@ -120,27 +98,8 @@ To retrieve multiple twins using a single API call, see the query API examples i
Consider the following model (written in [Digital Twins Definition Language (DTDL)](https://github.com/Azure/opendigitaltwins-dtdl/tree/master/DTDL)) that defines a *Moon*:
-```json
-{
- "@id": "dtmi:example:Moon;1",
- "@type": "Interface",
- "@context": "dtmi:dtdl:context;2",
- "contents": [
- {
- "@type": "Property",
- "name": "radius",
- "schema": "double",
- "writable": true
- },
- {
- "@type": "Property",
- "name": "mass",
- "schema": "double",
- "writable": true
- }
- ]
-}
-```
+:::code language="json" source="~/digital-twins-docs-samples/models/Moon.json":::
+ The result of calling `object result = await client.GetDigitalTwinAsync("my-moon");` on a *Moon*-type twin might look like this: ```json
@@ -185,18 +144,13 @@ To view all of the digital twins in your instance, use a [query](how-to-query-gr
Here is the body of the basic query that will return a list of all digital twins in the instance:
-```sql
-SELECT *
-FROM DIGITALTWINS
-```
+:::code language="sql" source="~/digital-twins-docs-samples/queries/queries.sql" id="GetAllTwins":::
## Update a digital twin To update properties of a digital twin, you write the information you want to replace in [JSON Patch](http://jsonpatch.com/) format. In this way, you can replace multiple properties at once. You then pass the JSON Patch document into an `UpdateDigitalTwin()` method:
-```csharp
-await client.UpdateDigitalTwin(id, patch);
-```
+:::code language="csharp" source="~/digital-twins-docs-samples/sdks/csharp/twin_operations_sample.cs" id="UpdateTwinCall":::
A patch call can update as many properties on a single twin as you'd like (even all of them). If you need to update properties across multiple twins, you'll need a separate update call for each twin.
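For instance, a minimal sketch (with hypothetical twin IDs) that applies the same patch to several twins, one call per twin, using the `JsonPatchDocument` helper described further below:

```csharp
// A single UpdateDigitalTwinAsync call patches one twin; loop for multiple twins
var patch = new JsonPatchDocument();
patch.AppendReplace("/Temperature", 25.0);
foreach (string twinId in new[] { "room1", "room2" })  // hypothetical IDs
{
    await client.UpdateDigitalTwinAsync(twinId, patch);
}
```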
@@ -205,27 +159,11 @@ A patch call can update as many properties on a single twin as you'd like (even
Here is an example of JSON Patch code. This document replaces the *mass* and *radius* property values of the digital twin it is applied to.
-```json
-[
- {
- "op": "replace",
- "path": "/mass",
- "value": 0.0799
- },
- {
- "op": "replace",
- "path": "/radius",
- "value": 0.800
- }
-]
-```
+:::code language="json" source="~/digital-twins-docs-samples/models/patch.json":::
+ You can create patches using a `JsonPatchDocument` in the [SDK](how-to-use-apis-sdks.md). Here is an example.
-```csharp
-var updateTwinData = new JsonPatchDocument();
-updateTwinData.AppendAdd("/Temperature", temperature.Value<double>());
-await client.UpdateDigitalTwinAsync(twin_Id, updateTwinData);
-```
+:::code language="csharp" source="~/digital-twins-docs-samples/sdks/csharp/twin_operations_other.cs" id="UpdateTwin":::
### Update properties in digital twin components
@@ -233,15 +171,7 @@ Recall that a model may contain components, allowing it to be made up of other m
To patch properties in a digital twin's components, you can use path syntax in JSON Patch:
-```json
-[
- {
- "op": "replace",
- "path": "/mycomponentname/mass",
- "value": 0.0799
- }
-]
-```
+:::code language="json" source="~/digital-twins-docs-samples/models/patch-component.json":::
### Update a digital twin's model
@@ -249,15 +179,7 @@ The `UpdateDigitalTwin()` function can also be used to migrate a digital twin to
For example, consider the following JSON Patch document that replaces the digital twin's metadata `$model` field:
-```json
-[
- {
- "op": "replace",
- "path": "/$metadata/$model",
- "value": "dtmi:example:foo;1"
- }
-]
-```
+:::code language="json" source="~/digital-twins-docs-samples/models/patch-model-1.json":::
This operation will only succeed if the digital twin being modified by the patch conforms with the new model.
@@ -268,20 +190,7 @@ Consider the following example:
The patch for this situation needs to update both the model and the twin's temperature property, like this:
-```json
-[
- {
- "op": "replace",
- "path": "/$metadata/$model",
- "value": "dtmi:example:foo_new;1"
- },
- {
- "op": "add",
- "path": "/temperature",
- "value": 60
- }
-]
-```
+:::code language="json" source="~/digital-twins-docs-samples/models/patch-model-2.json":::
### Handle conflicting update calls
@@ -302,62 +211,8 @@ You can delete twins using the `DeleteDigitalTwin()` method. However, you can on
Here is an example of the code to delete twins and their relationships:
-```csharp
-static async Task DeleteTwin(string id)
-{
- await FindAndDeleteOutgoingRelationshipsAsync(id);
- await FindAndDeleteIncomingRelationshipsAsync(id);
- try
- {
- await client.DeleteDigitalTwinAsync(id);
- } catch (RequestFailedException exc)
- {
- Console.WriteLine($"*** Error:{exc.Error}/{exc.Message}");
- }
-}
+:::code language="csharp" source="~/digital-twins-docs-samples/sdks/csharp/twin_operations_sample.cs" id="DeleteTwin":::
-public async Task FindAndDeleteOutgoingRelationshipsAsync(string dtId)
-{
- // Find the relationships for the twin
-
- try
- {
- // GetRelationshipsAsync will throw an error if a problem occurs
- AsyncPageable<BasicRelationship> rels = client.GetRelationshipsAsync<BasicRelationship>(dtId);
-
- await foreach (BasicRelationship rel in rels)
- {
- await client.DeleteRelationshipAsync(dtId, rel.Id).ConfigureAwait(false);
- Log.Ok($"Deleted relationship {rel.Id} from {dtId}");
- }
- }
- catch (RequestFailedException ex)
- {
- Log.Error($"*** Error {ex.Status}/{ex.ErrorCode} retrieving or deleting relationships for {dtId} due to {ex.Message}");
- }
-}
-
-async Task FindAndDeleteIncomingRelationshipsAsync(string dtId)
-{
- // Find the relationships for the twin
-
- try
- {
- // GetIncomingRelationshipsAsync will throw an error if a problem occurs
- AsyncPageable<IncomingRelationship> incomingRels = client.GetIncomingRelationshipsAsync(dtId);
-
- await foreach (IncomingRelationship incomingRel in incomingRels)
- {
- await client.DeleteRelationshipAsync(incomingRel.SourceId, incomingRel.RelationshipId).ConfigureAwait(false);
- Log.Ok($"Deleted incoming relationship {incomingRel.RelationshipId} from {dtId}");
- }
- }
- catch (RequestFailedException ex)
- {
- Log.Error($"*** Error {ex.Status}/{ex.ErrorCode} retrieving or deleting incoming relationships for {dtId} due to {ex.Message}");
- }
-}
-```
### Delete all digital twins For an example of how to delete all twins at once, download the sample app used in the [*Tutorial: Explore the basics with a sample client app*](tutorial-command-line-app.md). The *CommandLoop.cs* file does this in a `CommandDeleteAllTwins()` function.
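If you'd rather script this yourself, here is a sketch of the same idea (assuming the `DeleteTwin` helper above, and the generic `QueryAsync<T>` overload available in current SDK versions):

```csharp
// Query for every twin in the instance, then delete each one
// (DeleteTwin removes the twin's relationships first, as shown above)
AsyncPageable<BasicDigitalTwin> twins = client.QueryAsync<BasicDigitalTwin>("SELECT * FROM DIGITALTWINS");
await foreach (BasicDigitalTwin twin in twins)
{
    await DeleteTwin(twin.Id);
}
```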
@@ -373,11 +228,9 @@ The snippet uses the [Room.json](https://github.com/Azure-Samples/digital-twins-
Before you run the sample, do the following: 1. Download the model file, place it in your project, and replace the `<path-to>` placeholder in the code below to tell your program where to find it. 2. Replace the placeholder `<your-instance-hostname>` with your Azure Digital Twins instance's hostname.
-3. Add these packages to your project:
- ```cmd/sh
- dotnet add package Azure.DigitalTwins.Core --version 1.0.0-preview.3
- dotnet add package Azure.identity
- ```
+3. Add two dependencies to your project that will be needed to work with Azure Digital Twins. You can use the links below to navigate to the packages on NuGet, where you can find the console commands (including for .NET CLI) to add the latest version of each to your project.
+ * [**Azure.DigitalTwins.Core**](https://www.nuget.org/packages/Azure.DigitalTwins.Core). This is the package for the [Azure Digital Twins SDK for .NET](/dotnet/api/overview/azure/digitaltwins/client?view=azure-dotnet&preserve-view=true).
+ * [**Azure.Identity**](https://www.nuget.org/packages/Azure.Identity). This library provides tools to help with authentication against Azure.
You'll also need to set up local credentials if you want to run the sample directly. The next section walks through this. [!INCLUDE [Azure Digital Twins: local credentials prereq (outer)](../../includes/digital-twins-local-credentials-outer.md)]
@@ -386,153 +239,8 @@ You'll also need to set up local credentials if you want to run the sample direc
After completing the above steps, you can directly run the following sample code.
-```csharp
-using System;
-using Azure.DigitalTwins.Core;
-using Azure.Identity;
-using System.Threading.Tasks;
-using System.IO;
-using System.Collections.Generic;
-using Azure;
-using Azure.DigitalTwins.Core.Serialization;
-using System.Text.Json;
-
-namespace minimal
-{
- class Program
- {
-
- public static async Task Main(string[] args)
- {
- Console.WriteLine("Hello World!");
-
- //Create the Azure Digital Twins client for API calls
- string adtInstanceUrl = "https://<your-instance-hostname>";
- var credentials = new DefaultAzureCredential();
- DigitalTwinsClient client = new DigitalTwinsClient(new Uri(adtInstanceUrl), credentials);
- Console.WriteLine($"Service client created ΓÇô ready to go");
- Console.WriteLine();
-
- //Upload models
- Console.WriteLine($"Upload a model");
- Console.WriteLine();
- string dtdl = File.ReadAllText("<path-to>/Room.json");
- var typeList = new List<string>();
- typeList.Add(dtdl);
- // Upload the model to the service
- await client.CreateModelsAsync(typeList);
-
- //Create new digital twin
- BasicDigitalTwin twin = new BasicDigitalTwin();
- string twin_Id = "myRoomId";
- twin.Metadata = new DigitalTwinMetadata();
- twin.Metadata.ModelId = "dtmi:example:Room;1";
- // Initialize properties
- Dictionary<string, object> props = new Dictionary<string, object>();
- props.Add("Temperature", 35.0);
- props.Add("Humidity", 55.0);
- twin.Contents = props;
- await client.CreateOrReplaceDigitalTwinAsync<BasicDigitalTwin>(twin_Id, twin);
- Console.WriteLine("Twin created successfully");
- Console.WriteLine();
-
- //Print twin
- Console.WriteLine("--- Printing twin details:");
- twin = FetchAndPrintTwin(twin_Id, client);
- Console.WriteLine("--------");
- Console.WriteLine();
-
- //Update twin data
- var updateTwinData = new JsonPatchDocument();
- updateTwinData.AppendAdd("/Temperature", 25.0);
- await client.UpdateDigitalTwinAsync(twin_Id, updateTwinData);
- Console.WriteLine("Twin properties updated");
- Console.WriteLine();
-
- //Print twin again
- Console.WriteLine("--- Printing twin details (after update):");
- FetchAndPrintTwin(twin_Id, client);
- Console.WriteLine("--------");
- Console.WriteLine();
-
- //Delete twin
- await DeleteTwin(client, twin_Id);
- }
-
- private static BasicDigitalTwin FetchAndPrintTwin(string twin_Id, DigitalTwinsClient client)
- {
- Response<BasicDigitalTwin> res = client.GetDigitalTwin<BasicDigitalTwin>(twin_Id);
- BasicDigitalTwin twin = res.Value;
- Console.WriteLine($"Model id: {twin.Metadata.ModelId}");
- foreach (string prop in twin.Contents.Keys)
- {
- if (twin.Contents.TryGetValue(prop, out object value))
- Console.WriteLine($"Property '{prop}': {value}");
- }
-
- return twin;
- }
- private static async Task DeleteTwin(DigitalTwinsClient client, string id)
- {
- await FindAndDeleteOutgoingRelationshipsAsync(client, id);
- await FindAndDeleteIncomingRelationshipsAsync(client, id);
- try
- {
- await client.DeleteDigitalTwinAsync(id);
- Console.WriteLine("Twin deleted successfully");
- }
- catch (RequestFailedException exc)
- {
- Console.WriteLine($"*** Error:{exc.Message}");
- }
- }
-
- private static async Task FindAndDeleteOutgoingRelationshipsAsync(DigitalTwinsClient client, string dtId)
- {
- // Find the relationships for the twin
-
- try
- {
- // GetRelationshipsAsync will throw an error if a problem occurs
- AsyncPageable<BasicRelationship> rels = client.GetRelationshipsAsync<BasicRelationship>(dtId);
-
- await foreach (BasicRelationship rel in rels)
- {
- await client.DeleteRelationshipAsync(dtId, rel.Id).ConfigureAwait(false);
- Console.WriteLine($"Deleted relationship {rel.Id} from {dtId}");
- }
- }
- catch (RequestFailedException ex)
- {
- Console.WriteLine($"*** Error {ex.Status}/{ex.ErrorCode} retrieving or deleting relationships for {dtId} due to {ex.Message}");
- }
- }
-
- private static async Task FindAndDeleteIncomingRelationshipsAsync(DigitalTwinsClient client, string dtId)
- {
- // Find the relationships for the twin
-
- try
- {
- // GetIncomingRelationshipsAsync will throw an error if a problem occurs
- AsyncPageable<IncomingRelationship> incomingRels = client.GetIncomingRelationshipsAsync(dtId);
-
- await foreach (IncomingRelationship incomingRel in incomingRels)
- {
- await client.DeleteRelationshipAsync(incomingRel.SourceId, incomingRel.RelationshipId).ConfigureAwait(false);
- Console.WriteLine($"Deleted incoming relationship {incomingRel.RelationshipId} from {dtId}");
- }
- }
- catch (RequestFailedException ex)
- {
- Console.WriteLine($"*** Error {ex.Status}/{ex.ErrorCode} retrieving or deleting incoming relationships for {dtId} due to {ex.Message}");
- }
- }
+:::code language="csharp" source="~/digital-twins-docs-samples/sdks/csharp/twin_operations_sample.cs":::
- }
-}
-
-```
Here is the console output of the above program: :::image type="content" source="./media/how-to-manage-twin/console-output-manage-twins.png" alt-text="Console output showing that the twin is created, updated, and deleted" lightbox="./media/how-to-manage-twin/console-output-manage-twins.png":::
digital-twins https://docs.microsoft.com/en-us/azure/digital-twins/how-to-parse-models https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/digital-twins/how-to-parse-models.md
@@ -78,118 +78,11 @@ You can use the parser library directly, for things like validating models in yo
To support the parser code example below, consider several models defined in an Azure Digital Twins instance:
-```json
-[
- {
- "@context": "dtmi:dtdl:context;2",
- "@id": "dtmi:com:contoso:coffeeMaker;1",
- "@type": "Interface",
- "contents": [
- {
- "@type": "Component",
- "name": "coffeeMaker",
- "schema": "dtmi:com:contoso:coffeeMakerInterface;1"
- }
- ]
- },
- {
- "@context": "dtmi:dtdl:context;2",
- "@id": "dtmi:com:contoso:coffeeMakerInterface;1",
- "@type": "Interface",
- "contents": [
- {
- "@type": "Property",
- "name": "waterTemp",
- "schema": "double"
- }
- ]
- },
- {
- "@context": "dtmi:dtdl:context;2",
- "@id": "dtmi:com:contoso:coffeeBar;1",
- "@type": "Interface",
- "contents": [
- {
- "@type": "Relationship",
- "name": "foo",
- "target": "dtmi:com:contoso:coffeeMaker;1"
- },
- {
- "@type": "Property",
- "name": "capacity",
- "schema": "integer"
- }
- ]
- }
-]
-```
+:::code language="json" source="~/digital-twins-docs-samples/models/coffeeMaker-coffeeMakerInterface-coffeeBar.json":::
The following code shows an example of how to use the parser library to reflect on these definitions in C#:
-```csharp
-async void ParseDemo(DigitalTwinsClient client)
-{
- try
- {
- AsyncPageable<DigitalTwinsModelData> mdata = client.GetModelsAsync(new GetModelsOptions { IncludeModelDefinition = true });
- List<string> models = new List<string>();
- await foreach (DigitalTwinsModelData md in mdata)
- models.Add(md.DtdlModel);
- ModelParser parser = new ModelParser();
- IReadOnlyDictionary<Dtmi, DTEntityInfo> dtdlOM = await parser.ParseAsync(models);
-
- List<DTInterfaceInfo> interfaces = new List<DTInterfaceInfo>();
- IEnumerable<DTInterfaceInfo> ifenum =
- from entity in dtdlOM.Values
- where entity.EntityKind == DTEntityKind.Interface
- select entity as DTInterfaceInfo;
- interfaces.AddRange(ifenum);
- foreach (DTInterfaceInfo dtif in interfaces)
- {
- PrintInterfaceContent(dtif, dtdlOM);
- }
-
- } catch (RequestFailedException rex)
- {
- Console.WriteLine($"*** Error retrieving models: {rex.Status}/{rex.Message}");
- }
-}
-
-void PrintInterfaceContent(DTInterfaceInfo dtif, IReadOnlyDictionary<Dtmi, DTEntityInfo> dtdlOM, int indent=0)
-{
- StringBuilder sb = new StringBuilder();
- for (int i = 0; i < indent; i++) sb.Append(" ");
- Console.WriteLine($"{sb}Interface: {dtif.Id} | {dtif.DisplayName}");
- SortedDictionary<string, DTContentInfo> contents = dtif.Contents;
- foreach (DTContentInfo item in contents.Values)
- {
- switch (item.EntityKind)
- {
- case DTEntityKind.Property:
- DTPropertyInfo pi = item as DTPropertyInfo;
- Console.WriteLine($"{sb}--Property: {pi.Name} with schema {pi.Schema}");
- break;
- case DTEntityKind.Relationship:
- DTRelationshipInfo ri = item as DTRelationshipInfo;
- Console.WriteLine($"{sb}--Relationship: {ri.Name} with target {ri.Target}");
- break;
- case DTEntityKind.Telemetry:
- DTTelemetryInfo ti = item as DTTelemetryInfo;
- Console.WriteLine($"{sb}--Telemetry: {ti.Name} with schema {ti.Schema}");
- break;
- case DTEntityKind.Component:
- DTComponentInfo ci = item as DTComponentInfo;
- Console.WriteLine($"{sb}--Component: {ci.Id} | {ci.Name}");
- dtdlOM.TryGetValue(ci.Id, out DTEntityInfo value);
- DTInterfaceInfo component = value as DTInterfaceInfo;
- PrintInterfaceContent(component, dtdlOM, indent + 1);
- break;
- default:
- break;
- }
- }
-}
-```
+:::code language="csharp" source="~/digital-twins-docs-samples/sdks/csharp/parseModels.cs":::
## Next steps
digital-twins https://docs.microsoft.com/en-us/azure/digital-twins/how-to-provision-using-device-provisioning-service https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/digital-twins/how-to-provision-using-device-provisioning-service.md
@@ -86,150 +86,7 @@ Inside your function app project, add a new function. Also, add a new NuGet pack
In the newly created function code file, paste in the following code.
-```C#
-using System;
-using System.IO;
-using System.Threading.Tasks;
-using Microsoft.AspNetCore.Mvc;
-using Microsoft.Azure.WebJobs;
-using Microsoft.Azure.WebJobs.Extensions.Http;
-using Microsoft.AspNetCore.Http;
-using Microsoft.Extensions.Logging;
-using Microsoft.Azure.Devices.Shared;
-using Microsoft.Azure.Devices.Provisioning.Service;
-using System.Net.Http;
-using Azure.Identity;
-using Azure.DigitalTwins.Core;
-using Azure.Core.Pipeline;
-using Azure;
-using System.Collections.Generic;
-using Newtonsoft.Json;
-using Newtonsoft.Json.Linq;
-
-namespace Samples.AdtIothub
-{
- public static class DpsAdtAllocationFunc
- {
- const string adtAppId = "https://digitaltwins.azure.net";
- private static string adtInstanceUrl = Environment.GetEnvironmentVariable("ADT_SERVICE_URL");
- private static readonly HttpClient httpClient = new HttpClient();
-
- [FunctionName("DpsAdtAllocationFunc")]
- public static async Task<IActionResult> Run(
- [HttpTrigger(AuthorizationLevel.Function, "get", "post", Route = null)] HttpRequest req, ILogger log)
- {
- // Get request body
- string requestBody = await new StreamReader(req.Body).ReadToEndAsync();
- log.LogDebug($"Request.Body: {requestBody}");
- dynamic data = JsonConvert.DeserializeObject(requestBody);
-
- // Get registration ID of the device
- string regId = data?.deviceRuntimeContext?.registrationId;
-
- bool fail = false;
- string message = "Uncaught error";
- ResponseObj obj = new ResponseObj();
-
- // Must have unique registration ID on DPS request
- if (regId == null)
- {
- message = "Registration ID not provided for the device.";
- log.LogInformation("Registration ID: NULL");
- fail = true;
- }
- else
- {
- string[] hubs = data?.linkedHubs.ToObject<string[]>();
-
- // Must have hubs selected on the enrollment
- if (hubs == null)
- {
- message = "No hub group defined for the enrollment.";
- log.LogInformation("linkedHubs: NULL");
- fail = true;
- }
- else
- {
- // Find or create twin based on the provided registration ID and model ID
- dynamic payloadContext = data?.deviceRuntimeContext?.payload;
- string dtmi = payloadContext.modelId;
- log.LogDebug($"payload.modelId: {dtmi}");
- string dtId = await FindOrCreateTwin(dtmi, regId, log);
-
- // Get first linked hub (TODO: select one of the linked hubs based on policy)
- obj.iotHubHostName = hubs[0];
-
- // Specify the initial tags for the device.
- TwinCollection tags = new TwinCollection();
- tags["dtmi"] = dtmi;
- tags["dtId"] = dtId;
-
- // Specify the initial desired properties for the device.
- TwinCollection properties = new TwinCollection();
-
- // Add the initial twin state to the response.
- TwinState twinState = new TwinState(tags, properties);
- obj.initialTwin = twinState;
- }
- }
-
- log.LogDebug("Response: " + ((obj.iotHubHostName != null) ? JsonConvert.SerializeObject(obj) : message));
-
- return (fail)
- ? new BadRequestObjectResult(message)
- : (ActionResult)new OkObjectResult(obj);
- }
-
- public static async Task<string> FindOrCreateTwin(string dtmi, string regId, ILogger log)
- {
- // Create Digital Twins client
- var cred = new ManagedIdentityCredential(adtAppId);
- var client = new DigitalTwinsClient(new Uri(adtInstanceUrl), cred, new DigitalTwinsClientOptions { Transport = new HttpClientTransport(httpClient) });
-
- // Find existing twin with registration ID
- string dtId;
- string query = $"SELECT * FROM DigitalTwins T WHERE $dtId = '{regId}' AND IS_OF_MODEL('{dtmi}')";
- AsyncPageable<string> twins = client.QueryAsync(query);
-
- await foreach (string twinJson in twins)
- {
- // Get DT ID from the Twin
- JObject twin = (JObject)JsonConvert.DeserializeObject(twinJson);
- dtId = (string)twin["$dtId"];
- log.LogInformation($"Twin '{dtId}' with Registration ID '{regId}' found in DT");
- return dtId;
- }
-
- // Not found, so create new twin
- log.LogInformation($"Twin ID not found, setting DT ID to regID");
- dtId = regId; // use the Registration ID as the DT ID
-
- // Define the model type for the twin to be created
- Dictionary<string, object> meta = new Dictionary<string, object>()
- {
- { "$model", dtmi }
- };
- // Initialize the twin properties
- Dictionary<string, object> twinProps = new Dictionary<string, object>()
- {
- { "$metadata", meta }
- };
- twinProps.Add("Temperature", 0.0);
-
- await client.CreateOrReplaceDigitalTwinAsync<Dictionary<string, object>>(dtId, twinProps);
- log.LogInformation($"Twin '{dtId}' created in DT");
-
- return dtId;
- }
- }
-
- public class ResponseObj
- {
- public string iotHubHostName { get; set; }
- public TwinState initialTwin { get; set; }
- }
-}
-```
+:::code language="csharp" source="~/digital-twins-docs-samples/sdks/csharp/adtIotHub_allocate.cs":::
Save the file and then re-publish your function app. For instructions on publishing the function app, see the [*Publish the app*](tutorial-end-to-end.md#publish-the-app) section of the end-to-end tutorial.
@@ -326,115 +183,7 @@ This function will use the IoT Hub device lifecycle event to retire an existing
Inside your published function app, add a new function class of type *Event Hub Trigger*, and paste in the code below.
-```C#
-using System;
-using System.Collections.Generic;
-using System.Linq;
-using System.Net.Http;
-using System.Threading.Tasks;
-using Azure;
-using Azure.Core.Pipeline;
-using Azure.DigitalTwins.Core;
-using Azure.DigitalTwins.Core.Serialization;
-using Azure.Identity;
-using Microsoft.Azure.EventHubs;
-using Microsoft.Azure.WebJobs;
-using Microsoft.Extensions.Logging;
-using Newtonsoft.Json;
-using Newtonsoft.Json.Linq;
-
-namespace Samples.AdtIothub
-{
- public static class DeleteDeviceInTwinFunc
- {
- private static string adtAppId = "https://digitaltwins.azure.net";
- private static readonly string adtInstanceUrl = System.Environment.GetEnvironmentVariable("ADT_SERVICE_URL", EnvironmentVariableTarget.Process);
- private static readonly HttpClient httpClient = new HttpClient();
-
- [FunctionName("DeleteDeviceInTwinFunc")]
- public static async Task Run(
- [EventHubTrigger("lifecycleevents", Connection = "EVENTHUB_CONNECTIONSTRING")] EventData[] events, ILogger log)
- {
- var exceptions = new List<Exception>();
-
- foreach (EventData eventData in events)
- {
- try
- {
- //log.LogDebug($"EventData: {System.Text.Json.JsonSerializer.Serialize(eventData)}");
-
- string opType = eventData.Properties["opType"] as string;
- if (opType == "deleteDeviceIdentity")
- {
- string deviceId = eventData.Properties["deviceId"] as string;
-
- // Create Digital Twin client
- var cred = new ManagedIdentityCredential(adtAppId);
- var client = new DigitalTwinsClient(new Uri(adtInstanceUrl), cred, new DigitalTwinsClientOptions { Transport = new HttpClientTransport(httpClient) });
-
- // Find twin based on the original Registration ID
- string regID = deviceId; // simple mapping
- string dtId = await GetTwinId(client, regID, log);
- if (dtId != null)
- {
- await DeleteRelationships(client, dtId, log);
-
- // Delete twin
- await client.DeleteDigitalTwinAsync(dtId);
- log.LogInformation($"Twin '{dtId}' deleted in DT");
- }
- }
- }
- catch (Exception e)
- {
- // We need to keep processing the rest of the batch - capture this exception and continue.
- exceptions.Add(e);
- }
- }
-
- if (exceptions.Count > 1)
- throw new AggregateException(exceptions);
-
- if (exceptions.Count == 1)
- throw exceptions.Single();
- }
-
- public static async Task<string> GetTwinId(DigitalTwinsClient client, string regId, ILogger log)
- {
- string query = $"SELECT * FROM DigitalTwins T WHERE T.$dtId = '{regId}'";
- AsyncPageable<string> twins = client.QueryAsync(query);
- await foreach (string twinJson in twins)
- {
- JObject twin = (JObject)JsonConvert.DeserializeObject(twinJson);
- string dtId = (string)twin["$dtId"];
- log.LogInformation($"Twin '{dtId}' found in DT");
- return dtId;
- }
-
- return null;
- }
-
- public static async Task DeleteRelationships(DigitalTwinsClient client, string dtId, ILogger log)
- {
- var relationshipIds = new List<string>();
-
- AsyncPageable<string> relationships = client.GetRelationshipsAsync(dtId);
- await foreach (var relationshipJson in relationships)
- {
- BasicRelationship relationship = System.Text.Json.JsonSerializer.Deserialize<BasicRelationship>(relationshipJson);
- relationshipIds.Add(relationship.Id);
- }
-
- foreach (var relationshipId in relationshipIds)
- {
- client.DeleteRelationship(dtId, relationshipId);
- log.LogInformation($"Twin '{dtId}' relationship '{relationshipId}' deleted in DT");
- }
- }
- }
-}
-```
+:::code language="csharp" source="~/digital-twins-docs-samples/sdks/csharp/adtIotHub_delete.cs":::
Save the project, then publish the function app again. For instructions on publishing the function app, see the [*Publish the app*](tutorial-end-to-end.md#publish-the-app) section of the end-to-end tutorial.
digital-twins https://docs.microsoft.com/en-us/azure/digital-twins/how-to-query-graph https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/digital-twins/how-to-query-graph.md
@@ -29,43 +29,28 @@ This article begins with sample queries that illustrate the query language struc
Here is the basic query that will return a list of all digital twins in the instance:
-```sql
-SELECT *
-FROM DIGITALTWINS
-```
+:::code language="sql" source="~/digital-twins-docs-samples/queries/queries.sql" id="GetAllTwins":::
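To run a query like this from the .NET SDK, a minimal sketch (assuming an authenticated `client` and the generic `QueryAsync<T>` overload available in current SDK versions) looks like this:

```csharp
// Run the query and page through the results
AsyncPageable<BasicDigitalTwin> result = client.QueryAsync<BasicDigitalTwin>("SELECT * FROM DIGITALTWINS");
await foreach (BasicDigitalTwin twin in result)
{
    Console.WriteLine($"Found twin: {twin.Id}");
}
```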
## Query by property Get digital twins by **properties** (including ID and metadata):
-```sql
-SELECT *
-FROM DigitalTwins T
-WHERE T.firmwareVersion = '1.1'
-AND T.$dtId in ['123', '456']
-AND T.Temperature = 70
-```
+:::code language="sql" source="~/digital-twins-docs-samples/queries/queries.sql" id="QueryByProperty1":::
> [!TIP] > The ID of a digital twin is queried using the metadata field `$dtId`. You can also get twins based on **whether a certain property is defined**. Here is a query that gets twins that have a defined *Location* property:
-```sql
-SELECT * FROM DIGITALTWINS WHERE IS_DEFINED(Location)
-```
+:::code language="sql" source="~/digital-twins-docs-samples/queries/queries.sql" id="QueryByProperty2":::
This can help you to get twins by their *tag* properties, as described in [Add tags to digital twins](how-to-use-tags.md). Here is a query that gets all twins tagged with *red*:
-```sql
-SELECT * FROM DIGITALTWINS WHERE IS_DEFINED(tags.red)
-```
+:::code language="sql" source="~/digital-twins-docs-samples/queries/queries.sql" id="QueryMarkerTags1":::
You can also get twins based on the **type of a property**. Here is a query that gets twins whose *Temperature* property is a number:
-```sql
-SELECT * FROM DIGITALTWINS T WHERE IS_NUMBER(T.Temperature)
-```
+:::code language="sql" source="~/digital-twins-docs-samples/queries/queries.sql" id="QueryByProperty3":::
## Query by model
@@ -83,30 +68,22 @@ So for example, if you query for twins of the model `dtmi:example:widget;4`, the
The simplest use of `IS_OF_MODEL` takes only a `twinTypeName` parameter: `IS_OF_MODEL(twinTypeName)`. Here is a query example that passes a value in this parameter:
-```sql
-SELECT * FROM DIGITALTWINS WHERE IS_OF_MODEL('dtmi:example:thing;1')
-```
+:::code language="sql" source="~/digital-twins-docs-samples/queries/queries.sql" id="QueryByModel1":::
To specify a twin collection to search when there is more than one (like when a `JOIN` is used), add the `twinCollection` parameter: `IS_OF_MODEL(twinCollection, twinTypeName)`. Here is a query example that adds a value for this parameter:
-```sql
-SELECT * FROM DIGITALTWINS DT WHERE IS_OF_MODEL(DT, 'dtmi:example:thing;1')
-```
+:::code language="sql" source="~/digital-twins-docs-samples/queries/queries.sql" id="QueryByModel2":::
To do an exact match, add the `exact` parameter: `IS_OF_MODEL(twinTypeName, exact)`. Here is a query example that adds a value for this parameter:
-```sql
-SELECT * FROM DIGITALTWINS WHERE IS_OF_MODEL('dtmi:example:thing;1', exact)
-```
+:::code language="sql" source="~/digital-twins-docs-samples/queries/queries.sql" id="QueryByModel3":::
You can also pass all three arguments together: `IS_OF_MODEL(twinCollection, twinTypeName, exact)`. Here is a query example specifying a value for all three parameters:
-```sql
-SELECT ROOM FROM DIGITALTWINS DT WHERE IS_OF_MODEL(DT, 'dtmi:example:thing;1', exact)
-```
+:::code language="sql" source="~/digital-twins-docs-samples/queries/queries.sql" id="QueryByModel4":::
## Query by relationship
@@ -128,12 +105,7 @@ To get a dataset that includes relationships, use a single `FROM` statement foll
Here is a sample relationship-based query. This code snippet selects all digital twins with an *ID* property of 'ABC', and all digital twins related to these digital twins via a *contains* relationship.
-```sql
-SELECT T, CT
-FROM DIGITALTWINS T
-JOIN CT RELATED T.contains
-WHERE T.$dtId = 'ABC'
-```
+:::code language="sql" source="~/digital-twins-docs-samples/queries/queries.sql" id="QueryByRelationship1":::
> [!NOTE] > The developer does not need to correlate this `JOIN` with a key value in the `WHERE` clause (or specify a key value inline with the `JOIN` definition). This correlation is computed automatically by the system, as the relationship properties themselves identify the target entity.
@@ -145,13 +117,7 @@ The Azure Digital Twins query language allows filtering and projection of relati
As an example, consider a *servicedBy* relationship that has a *reportedCondition* property. In the below query, this relationship is given an alias of 'R' in order to reference its property.
-```sql
-SELECT T, SBT, R
-FROM DIGITALTWINS T
-JOIN SBT RELATED T.servicedBy R
-WHERE T.$dtId = 'ABC'
-AND R.reportedCondition = 'clean'
-```
+:::code language="sql" source="~/digital-twins-docs-samples/queries/queries.sql" id="QueryByRelationship2":::
In the example above, note how *reportedCondition* is a property of the *servicedBy* relationship itself (NOT of some digital twin that has a *servicedBy* relationship).
@@ -161,58 +127,27 @@ Up to five `JOIN`s are supported in a single query. This allows you to traverse
Here is an example of a multi-join query, which gets all the light bulbs contained in the light panels in rooms 1 and 2.
-```sql
-SELECT LightBulb
-FROM DIGITALTWINS Room
-JOIN LightPanel RELATED Room.contains
-JOIN LightBulb RELATED LightPanel.contains
-WHERE IS_OF_MODEL(LightPanel, 'dtmi:contoso:com:lightpanel;1')
-AND IS_OF_MODEL(LightBulb, 'dtmi:contoso:com:lightbulb;1')
-AND Room.$dtId IN ['room1', 'room2']
-```
+:::code language="sql" source="~/digital-twins-docs-samples/queries/queries.sql" id="QueryByRelationship3":::
## Count items You can count the number of items in a result set using the `Select COUNT` clause:
-```sql
-SELECT COUNT()
-FROM DIGITALTWINS
-```
+:::code language="sql" source="~/digital-twins-docs-samples/queries/queries.sql" id="SelectCount1":::
Add a `WHERE` clause to count the number of items that meet certain criteria. Here are some examples of counting with an applied filter based on the type of twin model (for more on this syntax, see [*Query by model*](#query-by-model) above):
-```sql
-SELECT COUNT()
-FROM DIGITALTWINS
-WHERE IS_OF_MODEL('dtmi:sample:Room;1')
-
-SELECT COUNT()
-FROM DIGITALTWINS c
-WHERE IS_OF_MODEL('dtmi:sample:Room;1') AND c.Capacity > 20
-```
+:::code language="sql" source="~/digital-twins-docs-samples/queries/queries.sql" id="SelectCount2":::
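As a hedged sketch, here is how you might read the result of a `COUNT` query from C#. It assumes the service surfaces the aggregate as a property named `COUNT` on a single result row, matching the column name in the query:

```csharp
using System;
using System.Text.Json;
using Azure.DigitalTwins.Core;
using Azure.Identity;

// Sketch only: <your-instance-hostname> is a placeholder.
var client = new DigitalTwinsClient(
    new Uri("https://<your-instance-hostname>"), new DefaultAzureCredential());

string query = "SELECT COUNT() FROM DIGITALTWINS WHERE IS_OF_MODEL('dtmi:sample:Room;1')";

// A COUNT query returns one row; the aggregate is assumed to be a COUNT property.
await foreach (JsonElement row in client.QueryAsync<JsonElement>(query))
{
    Console.WriteLine($"Matching rooms: {row.GetProperty("COUNT").GetInt32()}");
}
```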
You can also use `COUNT` along with the `JOIN` clause. Here is a query that counts all the light bulbs contained in the light panels of rooms 1 and 2:
-```sql
-SELECT COUNT()
-FROM DIGITALTWINS Room
-JOIN LightPanel RELATED Room.contains
-JOIN LightBulb RELATED LightPanel.contains
-WHERE IS_OF_MODEL(LightPanel, 'dtmi:contoso:com:lightpanel;1')
-AND IS_OF_MODEL(LightBulb, 'dtmi:contoso:com:lightbulb ;1')
-AND Room.$dtId IN ['room1', 'room2']
-```
+:::code language="sql" source="~/digital-twins-docs-samples/queries/queries.sql" id="SelectCount3":::
## Filter results: select top items

You can select the first several "top" items in a query using the `Select TOP` clause.
-```sql
-SELECT TOP (5)
-FROM DIGITALTWINS
-WHERE ...
-```
+:::code language="sql" source="~/digital-twins-docs-samples/queries/queries.sql" id="SelectTop":::
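A hedged sketch of running a `TOP` query from C#. The `Temperature` filter is hypothetical (borrowed from the quickstart's sample data) since the article elides the `WHERE` clause:

```csharp
using System;
using Azure.DigitalTwins.Core;
using Azure.Identity;

// Sketch only: <your-instance-hostname> is a placeholder.
var client = new DigitalTwinsClient(
    new Uri("https://<your-instance-hostname>"), new DefaultAzureCredential());

// Hypothetical filter: the first five twins reporting a Temperature above 75.
string query = "SELECT TOP (5) FROM DIGITALTWINS T WHERE T.Temperature > 75";

await foreach (BasicDigitalTwin twin in client.QueryAsync<BasicDigitalTwin>(query))
{
    Console.WriteLine($"Twin: {twin.Id}");
}
```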
## Filter results: specify return set with projections
@@ -223,54 +158,25 @@ By using projections in the `SELECT` statement, you can choose which columns a q
Here is an example of a query that uses projection to return twins and relationships. The following query projects the *Consumer*, *Factory* and *Edge* from a scenario where a *Factory* with an ID of *ABC* is related to the *Consumer* through a relationship of *Factory.customer*, and that relationship is presented as the *Edge*.
-```sql
-SELECT Consumer, Factory, Edge
-FROM DIGITALTWINS Factory
-JOIN Consumer RELATED Factory.customer Edge
-WHERE Factory.$dtId = 'ABC'
-```
+:::code language="sql" source="~/digital-twins-docs-samples/queries/queries.sql" id="Projections1":::
You can also use projection to return a property of a twin. The following query projects the *Name* property of the *Consumers* that are related to the *Factory* with an ID of *ABC* through a relationship of *Factory.customer*.
-```sql
-SELECT Consumer.name
-FROM DIGITALTWINS Factory
-JOIN Consumer RELATED Factory.customer Edge
-WHERE Factory.$dtId = 'ABC'
-AND IS_PRIMITIVE(Consumer.name)
-```
+:::code language="sql" source="~/digital-twins-docs-samples/queries/queries.sql" id="Projections2":::
You can also use projection to return a property of a relationship. Like in the previous example, the following query projects the *Name* property of the *Consumers* related to the *Factory* with an ID of *ABC* through a relationship of *Factory.customer*; but now it also returns two properties of that relationship, *prop1* and *prop2*. It does this by naming the relationship *Edge* and gathering its properties.
-```sql
-SELECT Consumer.name, Edge.prop1, Edge.prop2, Factory.area
-FROM DIGITALTWINS Factory
-JOIN Consumer RELATED Factory.customer Edge
-WHERE Factory.$dtId = 'ABC'
-AND IS_PRIMITIVE(Factory.area) AND IS_PRIMITIVE(Consumer.name) AND IS_PRIMITIVE(Edge.prop1) AND IS_PRIMITIVE(Edge.prop2)
-```
+:::code language="sql" source="~/digital-twins-docs-samples/queries/queries.sql" id="Projections3":::
You can also use aliases to simplify queries with projection. The following query does the same operations as the previous example, but it aliases the property names to `consumerName`, `first`, `second`, and `factoryArea`.
-```sql
-SELECT Consumer.name AS consumerName, Edge.prop1 AS first, Edge.prop2 AS second, Factory.area AS factoryArea
-FROM DIGITALTWINS Factory
-JOIN Consumer RELATED Factory.customer Edge
-WHERE Factory.$dtId = 'ABC'
-AND IS_PRIMITIVE(Factory.area) AND IS_PRIMITIVE(Consumer.name) AND IS_PRIMITIVE(Edge.prop1) AND IS_PRIMITIVE(Edge.prop2)
-```
+:::code language="sql" source="~/digital-twins-docs-samples/queries/queries.sql" id="Projections4":::
Here is a similar query that queries the same set as above, but projects only the *Consumer.name* property as `consumerName`, and projects the complete *Factory* as a twin.
-```sql
-SELECT Consumer.name AS consumerName, Factory
-FROM DIGITALTWINS Factory
-JOIN Consumer RELATED Factory.customer Edge
-WHERE Factory.$dtId = 'ABC'
-AND IS_PRIMITIVE(Factory.area) AND IS_PRIMITIVE(Consumer.name)
-```
+:::code language="sql" source="~/digital-twins-docs-samples/queries/queries.sql" id="Projections5":::
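To show how aliased projections surface in results, here is a rough C# sketch using the query above. The row shape is an assumption based on the projected names: the alias becomes a property of the row, and *Factory* comes back as a full twin:

```csharp
using System;
using System.Text.Json;
using Azure.DigitalTwins.Core;
using Azure.Identity;

// Sketch only: <your-instance-hostname> is a placeholder.
var client = new DigitalTwinsClient(
    new Uri("https://<your-instance-hostname>"), new DefaultAzureCredential());

string query =
    "SELECT Consumer.name AS consumerName, Factory " +
    "FROM DIGITALTWINS Factory " +
    "JOIN Consumer RELATED Factory.customer Edge " +
    "WHERE Factory.$dtId = 'ABC' " +
    "AND IS_PRIMITIVE(Factory.area) AND IS_PRIMITIVE(Consumer.name)";

await foreach (JsonElement row in client.QueryAsync<JsonElement>(query))
{
    // The alias becomes the property name; Factory is returned as a complete twin.
    Console.WriteLine($"consumerName: {row.GetProperty("consumerName")}");
    Console.WriteLine($"factory twin ID: {row.GetProperty("Factory").GetProperty("$dtId")}");
}
```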
## Build efficient queries with the IN operator
@@ -280,12 +186,7 @@ For example, consider a scenario in which *Buildings* contain *Floors* and *Floo
1. Find floors in the building based on the `contains` relationship.
- ```sql
- SELECT Floor
- FROM DIGITALTWINS Building
- JOIN Floor RELATED Building.contains
- WHERE Building.$dtId = @buildingId
- ```
+ :::code language="sql" source="~/digital-twins-docs-samples/queries/queries.sql" id="INOperatorWithout":::
2. To find rooms, instead of considering the floors one-by-one and running a `JOIN` query to find the rooms for each one, you can query with a collection of the floors in the building (named *Floor* in the query below).
@@ -297,26 +198,18 @@ For example, consider a scenario in which *Buildings* contain *Floors* and *Floo
In the query:
- ```sql
-
- SELECT Room
- FROM DIGITALTWINS Floor
- JOIN Room RELATED Floor.contains
- WHERE Floor.$dtId IN ['floor1','floor2', ..'floorn']
- AND Room. Temperature > 72
- AND IS_OF_MODEL(Room, 'dtmi:com:contoso:Room;1')
-
- ```
+ :::code language="sql" source="~/digital-twins-docs-samples/queries/queries.sql" id="INOperatorWith":::
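As a sketch of this two-step pattern in C# (names hypothetical; the floor IDs are assumed to have been collected from the first query and are hard-coded here for brevity):

```csharp
using System;
using System.Collections.Generic;
using System.Linq;
using System.Text.Json;
using Azure.DigitalTwins.Core;
using Azure.Identity;

// Sketch only: <your-instance-hostname> is a placeholder.
var client = new DigitalTwinsClient(
    new Uri("https://<your-instance-hostname>"), new DefaultAzureCredential());

// Step 1 would populate this list from the floors query.
var floorIds = new List<string> { "floor1", "floor2" };
string inList = string.Join(", ", floorIds.Select(id => $"'{id}'"));

// Step 2: one query over the whole collection instead of one JOIN query per floor.
string query = "SELECT Room FROM DIGITALTWINS Floor JOIN Room RELATED Floor.contains " +
               $"WHERE Floor.$dtId IN [{inList}] " +
               "AND Room.Temperature > 72 " +
               "AND IS_OF_MODEL(Room, 'dtmi:com:contoso:Room;1')";

await foreach (JsonElement row in client.QueryAsync<JsonElement>(query))
{
    Console.WriteLine($"Room: {row.GetProperty("Room").GetProperty("$dtId")}");
}
```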
## Other compound query examples

You can **combine** any of the above types of query using combination operators to include more detail in a single query. Here are some additional examples of compound queries that query for more than one type of twin descriptor at once.
-| Description | Query |
-| --- | --- |
-| Out of the devices that *Room 123* has, return the MxChip devices that serve the role of Operator | `SELECT device`<br>`FROM DigitalTwins space`<br>`JOIN device RELATED space.has`<br>`WHERE space.$dtid = 'Room 123'`<br>`AND device.$metadata.model = 'dtmi:contoso:com:DigitalTwins:MxChip:3'`<br>`AND has.role = 'Operator'` |
-| Get twins that have a relationship named *Contains* with another twin that has an ID of *id1* | `SELECT Room`<br>`FROM DIGITALTWINS Room`<br>`JOIN Thermostat RELATED Room.Contains`<br>`WHERE Thermostat.$dtId = 'id1'` |
-| Get all the rooms of this room model that are contained by *floor11* | `SELECT Room`<br>`FROM DIGITALTWINS Floor`<br>`JOIN Room RELATED Floor.Contains`<br>`WHERE Floor.$dtId = 'floor11'`<br>`AND IS_OF_MODEL(Room, 'dtmi:contoso:com:DigitalTwins:Room;1')` |
+* Out of the devices that *Room 123* has, return the MxChip devices that serve the role of Operator
+ :::code language="sql" source="~/digital-twins-docs-samples/queries/queries.sql" id="OtherExamples1":::
+* Get twins that have a relationship named *Contains* with another twin that has an ID of *id1*
+ :::code language="sql" source="~/digital-twins-docs-samples/queries/queries.sql" id="OtherExamples2":::
+* Get all the rooms of this room model that are contained by *floor11*
+ :::code language="sql" source="~/digital-twins-docs-samples/queries/queries.sql" id="OtherExamples3":::
## Run queries with the API
@@ -326,42 +219,13 @@ You can call the API directly, or use one of the [SDKs](how-to-use-apis-sdks.md#
The following code snippet illustrates the [.NET (C#) SDK](/dotnet/api/overview/azure/digitaltwins/client?view=azure-dotnet&preserve-view=true) call from a client app:
-```csharp
- string adtInstanceEndpoint = "https://<your-instance-hostname>";
-
- var credential = new DefaultAzureCredential();
- DigitalTwinsClient client = new DigitalTwinsClient(new Uri(adtInstanceEndpoint), credential);
-
- // Run a query for all twins
- string query = "SELECT * FROM DIGITALTWINS";
- AsyncPageable<BasicDigitalTwin> result = client.QueryAsync<BasicDigitalTwin>(query);
-```
+:::code language="csharp" source="~/digital-twins-docs-samples/sdks/csharp/queries.cs" id="RunQuery":::
This call returns query results in the form of a [BasicDigitalTwin](/dotnet/api/azure.digitaltwins.core.basicdigitaltwin?view=azure-dotnet&preserve-view=true) object. Query calls support paging. Here is a complete example using `BasicDigitalTwin` as query result type with error handling and paging:
-```csharp
-try
-{
- await foreach(BasicDigitalTwin twin in result)
- {
- // You can include your own logic to print the result
- // The logic below prints the twin's ID and contents
- Console.WriteLine($"Twin ID: {twin.Id} \nTwin data");
- IDictionary<string, object> contents = twin.Contents;
- foreach (KeyValuePair<string, object> kvp in contents)
- {
- Console.WriteLine($"{kvp.Key} {kvp.Value}");
- }
- }
-}
-catch (RequestFailedException e)
-{
- Console.WriteLine($"Error {e.Status}: {e.Message}");
- throw;
-}
-```
+:::code language="csharp" source="~/digital-twins-docs-samples/sdks/csharp/queries.cs" id="FullQuerySample":::
## Next steps
digital-twins https://docs.microsoft.com/en-us/azure/digital-twins/how-to-set-up-instance-cli https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/digital-twins/how-to-set-up-instance-cli.md
@@ -36,12 +36,13 @@ This version of this article goes through these steps manually, one by one, usin
## Create the Azure Digital Twins instance

In this section, you will **create a new instance of Azure Digital Twins** using the Cloud Shell command. You'll need to provide:
-* A resource group to deploy it in. If you don't already have an existing resource group in mind, you can create one now with this command:
+* A resource group where the instance will be deployed. If you don't already have an existing resource group in mind, you can create one now with this command:
  ```azurecli-interactive
  az group create --location <region> --name <name-for-your-resource-group>
  ```

* A region for the deployment. To see what regions support Azure Digital Twins, visit [*Azure products available by region*](https://azure.microsoft.com/global-infrastructure/services/?products=digital-twins).
-* A name for your instance. The name of the new instance must be unique within the region for your subscription (meaning that if your subscription has another Azure Digital Twins instance in the region that's already using the name you choose, you'll be asked to pick a different name).
+* A name for your instance. If your subscription has another Azure Digital Twins instance in the region that's
+ already using the specified name, you'll be asked to pick a different name.
Use these values in the following command to create the instance:
@@ -55,7 +56,7 @@ If the instance was created successfully, the result in Cloud Shell looks someth
:::image type="content" source="media/how-to-set-up-instance/cloud-shell/create-instance.png" alt-text="Command window with successful creation of resource group and Azure Digital Twins instance":::
-Note the Azure Digital Twins instance's *hostName*, *name*, and *resourceGroup* from the output. These are all important values that you may need as you continue working with your Azure Digital Twins instance, to set up authentication and related Azure resources. If other users will be programming against the instance, you should share these values with them.
+Note the Azure Digital Twins instance's **hostName**, **name**, and **resourceGroup** from the output. These are all important values that you may need as you continue working with your Azure Digital Twins instance, to set up authentication and related Azure resources. If other users will be programming against the instance, you should share these values with them.
> [!TIP]
> You can see these properties, along with all the properties of your instance, at any time by running `az dt show --dt-name <your-Azure-Digital-Twins-instance>`.
digital-twins https://docs.microsoft.com/en-us/azure/digital-twins/how-to-set-up-instance-portal https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/digital-twins/how-to-set-up-instance-portal.md
@@ -45,7 +45,8 @@ On the following *Create Resource* page, fill in the values given below:
* **Subscription**: The Azure subscription you're using
* **Resource group**: A resource group in which to deploy the instance. If you don't already have an existing resource group in mind, you can create one here by selecting the *Create new* link and entering a name for a new resource group
* **Location**: An Azure Digital Twins-enabled region for the deployment. For more details on regional support, visit [*Azure products available by region (Azure Digital Twins)*](https://azure.microsoft.com/global-infrastructure/services/?products=digital-twins).
-* **Resource name**: A name for your Azure Digital Twins instance. The name of the new instance must be unique within the region for your subscription (meaning that if your subscription has another Azure Digital Twins instance in the region that's already using the name you choose, you'll be asked to pick a different name).
+* **Resource name**: A name for your Azure Digital Twins instance. If your subscription has another Azure Digital Twins instance in the region that's
+ already using the specified name, you'll be asked to pick a different name.
:::image type="content" source= "media/how-to-set-up-instance/portal/create-azure-digital-twins-2.png" alt-text="Filling in the described values to create an Azure Digital Twins resource":::
digital-twins https://docs.microsoft.com/en-us/azure/digital-twins/how-to-set-up-instance-powershell https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/digital-twins/how-to-set-up-instance-powershell.md new file mode 100644
@@ -0,0 +1,160 @@
+---
+# Mandatory fields.
+title: Set up an instance and authentication (PowerShell)
+titleSuffix: Azure Digital Twins
+description: See how to set up an instance of the Azure Digital Twins service using Azure PowerShell
+author: baanders
+ms.author: baanders # Microsoft employees only
+ms.date: 12/16/2020
+ms.topic: how-to
+ms.service: digital-twins
+
+# Optional fields. Don't forget to remove # if you need a field.
+ms.custom: devx-track-azurepowershell
+# ms.reviewer: MSFT-alias-of-reviewer
+# manager: MSFT-alias-of-manager-or-PM-counterpart
+---
+
+# Set up an Azure Digital Twins instance and authentication (PowerShell)
+
+[!INCLUDE [digital-twins-setup-selector.md](../../includes/digital-twins-setup-selector.md)]
+
+This article covers the steps to **set up a new Azure Digital Twins instance**, including creating
+the instance and setting up authentication. After completing this article, you will have an Azure
+Digital Twins instance ready to start programming against.
+
+This version of this article goes through these steps manually, one by one, using [Azure PowerShell](/powershell/azure/new-azureps-module-az).
+
+* To go through these steps manually using the Azure portal, see the portal version of this article: [*How-to: Set up an instance and authentication (portal)*](how-to-set-up-instance-portal.md).
+* To run through an automated setup using a deployment script sample, see the scripted version of this article: [*How-to: Set up an instance and authentication (scripted)*](how-to-set-up-instance-scripted.md).
+
+[!INCLUDE [digital-twins-setup-steps.md](../../includes/digital-twins-setup-steps.md)]
+[!INCLUDE [digital-twins-setup-permissions.md](../../includes/digital-twins-setup-permissions.md)]
+
+## Prepare your environment
+
+1. First, choose where to run the commands in this article. You can choose to run Azure PowerShell commands using a local installation of Azure PowerShell, or in a browser window using [Azure Cloud Shell](https://shell.azure.com).
+ * If you choose to use Azure PowerShell locally:
+ 1. [Install the Az PowerShell module](/powershell/azure/install-az-ps).
+ 1. Open a PowerShell window on your machine.
+ 1. Connect to your Azure account using the [Connect-AzAccount](/powershell/module/az.accounts/connect-azaccount) cmdlet.
+ * If you choose to use Azure Cloud Shell:
+ 1. See [Overview of Azure Cloud Shell](../cloud-shell/overview.md) for more information about Cloud Shell.
+ 1. Open a Cloud Shell window by following [this link](https://shell.azure.com) in your browser.
+ 1. In the Cloud Shell icon bar, make sure your Cloud Shell is set to run the PowerShell version.
+
+ :::image type="content" source="media/how-to-set-up-instance/cloud-shell/cloud-shell-powershell.png" alt-text="Cloud Shell window showing selection of the PowerShell version":::
+
+1. If you have multiple Azure subscriptions, choose the appropriate subscription in which the resources should be billed. Select a specific subscription using the
+ [Set-AzContext](/powershell/module/az.accounts/set-azcontext) cmdlet.
+
+ ```azurepowershell-interactive
+ Set-AzContext -SubscriptionId 00000000-0000-0000-0000-000000000000
+ ```
+
+1. If this is your first time using Azure Digital Twins with this subscription, you must register the **Microsoft.DigitalTwins** resource provider. (If you're not sure, it's OK to run it again even if you've done it sometime in the past.)
+
+ ```azurepowershell-interactive
+ Register-AzResourceProvider -ProviderNamespace Microsoft.DigitalTwins
+ ```
+
+1. Use the following command to install the **Az.DigitalTwins** PowerShell module.
+ ```azurepowershell-interactive
+ Install-Module -Name Az.DigitalTwins
+ ```
+
+> [!IMPORTANT]
+> While the **Az.DigitalTwins** PowerShell module is in preview, you must install it separately
+> using the `Install-Module` cmdlet as described above. After this PowerShell module becomes generally available, it
+> will be part of future Az PowerShell module releases and available by default from within Azure
+> Cloud Shell.
+
+## Create the Azure Digital Twins instance
+
+In this section, you will **create a new instance of Azure Digital Twins** using Azure PowerShell.
+You'll need to provide:
+
+* An [Azure resource group](../azure-resource-manager/management/overview.md) where the instance will be deployed. If you don't
+ already have an existing resource group, you can create one using the
+ [New-AzResourceGroup](/powershell/module/az.resources/new-azresourcegroup) cmdlet:
+
+ ```azurepowershell-interactive
+ New-AzResourceGroup -Name <name-for-your-resource-group> -Location <region>
+ ```
+
+* A region for the deployment. To see what regions support Azure Digital Twins, visit
+ [*Azure products available by region*](https://azure.microsoft.com/global-infrastructure/services/?products=digital-twins).
+* A name for your instance. The name of the new instance must be unique within the region for your
+ subscription. If your subscription has another Azure Digital Twins instance in the region that's
+ already using the specified name, you'll be asked to pick a different name.
+
+Use your values in the following command to create the instance:
+
+```azurepowershell-interactive
+New-AzDigitalTwinsInstance -ResourceGroupName <your-resource-group> -ResourceName <name-for-your-Azure-Digital-Twins-instance> -Location <region>
+```
+
+### Verify success and collect important values
+
+If the instance was created successfully, the result looks similar to the following output
+containing information about the resource you've created:
+
+```Output
+Location Name Type
+-------- ---- ----
+<region> <name-for-your-Azure-Digital-Twins-instance> Microsoft.DigitalTwins/digitalTwinsInstances
+```
+
+Next, display the properties of your new instance by running `Get-AzDigitalTwinsInstance` and piping to `Select-Object -Property *`, like this:
+
+```azurepowershell-interactive
+Get-AzDigitalTwinsInstance -ResourceGroupName <your-resource-group> -ResourceName <name-for-your-Azure-Digital-Twins-instance> |
+ Select-Object -Property *
+```
+
+> [!TIP]
+> You can use this command to see all the properties of your instance at any time.
+
+Note the Azure Digital Twins instance's **HostName**, **Name**, and **ResourceGroup**. These are
+important values that you may need as you continue working with your Azure Digital Twins instance,
+to set up authentication and related Azure resources. If other users will be programming against
+the instance, you should share these values with them.
+
+You now have an Azure Digital Twins instance ready to go. Next, you'll give the appropriate Azure
+user permissions to manage it.
+
+## Set up user access permissions
+
+[!INCLUDE [digital-twins-setup-role-assignment.md](../../includes/digital-twins-setup-role-assignment.md)]
+
+First, determine the **ObjectId** for the Azure AD account of the user that should be assigned the role. You can find this value using the [Get-AzAdUser](/powershell/module/az.resources/get-azaduser) cmdlet, by passing in the user principal name on the Azure AD account to retrieve their ObjectId (and other user information). In most cases, the user principal name will match the user's email on the Azure AD account.
+
+```azurepowershell-interactive
+Get-AzADUser -UserPrincipalName <Azure-AD-user-principal-name-of-user-to-assign>
+```
+
+Next, use the **ObjectId** in the following command to assign the role. The command also requires you to enter the same subscription ID, resource group name, and
+Azure Digital Twins instance name that you chose earlier when creating the instance. The command must be run by a user with
+[sufficient permissions](#prerequisites-permission-requirements) in the Azure subscription.
+
+```azurepowershell-interactive
+$Params = @{
+ ObjectId = '<Azure-AD-user-object-id-of-user-to-assign>'
+ RoleDefinitionName = 'Azure Digital Twins Data Owner'
+ Scope = '/subscriptions/<subscriptionId>/resourceGroups/<resourceGroupName>/providers/Microsoft.DigitalTwins/digitalTwinsInstances/<name-for-your-Azure-Digital-Twins-instance>'
+}
+New-AzRoleAssignment @Params
+```
+
+This command outputs information about the role assignment that's been created.
+
+### Verify success
+
+[!INCLUDE [digital-twins-setup-verify-role-assignment.md](../../includes/digital-twins-setup-verify-role-assignment.md)]
+
+You now have an Azure Digital Twins instance ready to go, and have assigned permissions to manage it.
+
+## Next steps
+
+See how to connect a client application to your instance with authentication code:
+* [*How-to: Write app authentication code*](how-to-authenticate-client.md)
digital-twins https://docs.microsoft.com/en-us/azure/digital-twins/how-to-set-up-instance-scripted https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/digital-twins/how-to-set-up-instance-scripted.md
@@ -71,7 +71,8 @@ Here are the steps to run the deployment script in Cloud Shell.
* For the instance: the *subscription ID* of your Azure subscription to use
* For the instance: a *location* where you'd like to deploy the instance. To see what regions support Azure Digital Twins, visit [*Azure products available by region*](https://azure.microsoft.com/global-infrastructure/services/?products=digital-twins).
* For the instance: a *resource group* name. You can use an existing resource group, or enter the name of a new one to create.
- * For the instance: a *name* for your Azure Digital Twins instance. The name of the new instance must be unique within the region for your subscription (meaning that if your subscription has another Azure Digital Twins instance in the region that's already using the name you choose, you'll be asked to pick a different name).
+ * For the instance: a *name* for your Azure Digital Twins instance. If your subscription has another Azure Digital Twins instance in the region that's
+ already using the specified name, you'll be asked to pick a different name.
Here is an excerpt of the output log from the script:
digital-twins https://docs.microsoft.com/en-us/azure/digital-twins/how-to-use-apis-sdks https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/digital-twins/how-to-use-apis-sdks.md
@@ -94,62 +94,25 @@ Here are some code samples illustrating use of the .NET SDK.
Authenticate against the service:
-```csharp
-// Authenticate against the service and create a client
-string adtInstanceUrl = "https://<your-Azure-Digital-Twins-instance-hostName>";
-var credential = new DefaultAzureCredential();
-DigitalTwinsClient client = new DigitalTwinsClient(new Uri(adtInstanceUrl), credential);
-```
+:::code language="csharp" source="~/digital-twins-docs-samples/sdks/csharp/authentication.cs" id="DefaultAzureCredential_basic":::
[!INCLUDE [Azure Digital Twins: local credentials note](../../includes/digital-twins-local-credentials-note.md)]
-Upload a model and list models:
-
-```csharp
-// Upload a model
-var typeList = new List<string>();
-string dtdl = File.ReadAllText("SampleModel.json");
-typeList.Add(dtdl);
-try {
- await client.CreateModelsAsync(typeList);
-} catch (RequestFailedException rex) {
- Console.WriteLine($"Load model: {rex.Status}:{rex.Message}");
-}
-// Read a list of models back from the service
-AsyncPageable<DigitalTwinsModelData> modelDataList = client.GetModelsAsync();
-await foreach (DigitalTwinsModelData md in modelDataList)
-{
- Console.WriteLine($"Type name: {md.DisplayName}: {md.Id}");
-}
-```
-
-Create and query twins:
-
-```csharp
-// Initialize twin metadata
-BasicDigitalTwin twinData = new BasicDigitalTwin();
-
-twinData.Id = $"firstTwin";
-twinData.Metadata.ModelId = "dtmi:com:contoso:SampleModel;1";
-twinData.Contents.Add("data", "Hello World!");
-try {
- await client.CreateOrReplaceDigitalTwinAsync<BasicDigitalTwin>("firstTwin", twinData);
-} catch(RequestFailedException rex) {
- Console.WriteLine($"Create twin error: {rex.Status}:{rex.Message}");
-}
-
-// Run a query
-AsyncPageable<string> result = client.QueryAsync("Select * From DigitalTwins");
-await foreach (string twin in result)
-{
- // Use JSON deserialization to pretty-print
- object jsonObj = JsonSerializer.Deserialize<object>(twin);
- string prettyTwin = JsonSerializer.Serialize(jsonObj, new JsonSerializerOptions { WriteIndented = true });
- Console.WriteLine(prettyTwin);
- // Or use BasicDigitalTwin for convenient property access
- BasicDigitalTwin btwin = JsonSerializer.Deserialize<BasicDigitalTwin>(twin);
-}
-```
+Upload a model:
+
+:::code language="csharp" source="~/digital-twins-docs-samples/sdks/csharp/model_operations.cs" id="CreateModel":::
+
+List models:
+
+:::code language="csharp" source="~/digital-twins-docs-samples/sdks/csharp/model_operations.cs" id="GetModels":::
+
+Create twins:
+
+:::code language="csharp" source="~/digital-twins-docs-samples/sdks/csharp/twin_operations_sample.cs" id="CreateTwin_withHelper":::
+
+Query twins and loop through results:
+
+:::code language="csharp" source="~/digital-twins-docs-samples/sdks/csharp/queries.cs" id="FullQuerySample":::
See the [*Tutorial: Code a client app*](tutorial-code.md) for a walk-through of this sample app code.
@@ -169,103 +132,41 @@ The available helper classes are:
You can always deserialize twin data using the JSON library of your choice, like `System.Text.Json` or `Newtonsoft.Json`. For basic access to a twin, the helper classes make this a bit more convenient.
-```csharp
-Response<BasicDigitalTwin> twin = client.GetDigitalTwin(twin_id);
-Console.WriteLine($"Model id: {twin.Metadata.ModelId}");
-```
- The `BasicDigitalTwin` helper class also gives you access to properties defined on the twin, through a `Dictionary<string, object>`. To list properties of the twin, you can use:
-```csharp
-Response<BasicDigitalTwin> twin = client.GetDigitalTwin(twin_id);
-Console.WriteLine($"Model id: {twin.Metadata.ModelId}");
-foreach (string prop in twin.Contents.Keys)
-{
- if (twin.Contents.TryGetValue(prop, out object value))
- Console.WriteLine($"Property '{prop}': {value}");
-}
-```
+:::code language="csharp" source="~/digital-twins-docs-samples/sdks/csharp/twin_operations_sample.cs" id="GetTwin":::
##### Create a digital twin

Using the `BasicDigitalTwin` class, you can prepare data for creating a twin instance:
-```csharp
-BasicDigitalTwin twin = new BasicDigitalTwin();
-twin.Metadata = new DigitalTwinMetadata();
-twin.Metadata.ModelId = "dtmi:example:Room;1";
-// Initialize properties
-Dictionary<string, object> props = new Dictionary<string, object>();
-props.Add("Temperature", 25.0);
-twin.Contents = props;
-
-client.CreateOrReplaceDigitalTwinAsync<BasicDigitalTwin>("myNewRoomID", twin);
-```
+:::code language="csharp" source="~/digital-twins-docs-samples/sdks/csharp/twin_operations_sample.cs" id="CreateTwin_withHelper":::
The code above is equivalent to the following "manual" variant:
-```csharp
-Dictionary<string, object> meta = new Dictionary<string, object>()
-{
- { "$model", "dtmi:example:Room;1"}
-};
-Dictionary<string, object> twin = new Dictionary<string, object>()
-{
- { "$metadata", meta },
- { "Temperature", 25.0 }
-};
-client.CreateOrReplaceDigitalTwinAsync<BasicDigitalTwin>("myNewRoomID", twin);
-```
+:::code language="csharp" source="~/digital-twins-docs-samples/sdks/csharp/twin_operations_other.cs" id="CreateTwin_noHelper":::
##### Deserialize a relationship

You can always deserialize relationship data to a type of your choice. For basic access to a relationship, use the type `BasicRelationship`.
-```csharp
-BasicRelationship res = client.GetRelationship<BasicRelationship>(twin_id, rel_id);
-Console.WriteLine($"Relationship Name: {rel.Name}");
-```
+:::code language="csharp" source="~/digital-twins-docs-samples/sdks/csharp/graph_operations_sample.cs" id="GetRelationshipsCall":::
The `BasicRelationship` helper class also gives you access to properties defined on the relationship, through an `IDictionary<string, object>`. To list properties, you can use:
-```csharp
-BasicRelationship res = client.GetRelationship<BasicRelationship>(twin_id, rel_id);
-Console.WriteLine($"Relationship Name: {rel.Name}");
-foreach (string prop in rel.Contents.Keys)
-{
- if (twin.Contents.TryGetValue(prop, out object value))
- Console.WriteLine($"Property '{prop}': {value}");
-}
-```
+:::code language="csharp" source="~/digital-twins-docs-samples/sdks/csharp/graph_operations_other.cs" id="ListRelationshipProperties":::
##### Create a relationship

Using the `BasicRelationship` class, you can also prepare data for creating relationships on a twin instance:
-```csharp
-BasicRelationship rel = new BasicRelationship();
-rel.TargetId = "myTargetTwin";
-rel.Name = "contains"; // a relationship with this name must be defined in the model
-// Initialize properties
-Dictionary<string, object> props = new Dictionary<string, object>();
-props.Add("active", true);
-rel.Properties = props;
-client.CreateOrReplaceRelationshipAsync("mySourceTwin", "rel001", rel);
-```
+:::code language="csharp" source="~/digital-twins-docs-samples/sdks/csharp/graph_operations_other.cs" id="CreateRelationship_short":::
##### Create a patch for twin update

Update calls for twins and relationships use the [JSON Patch](http://jsonpatch.com/) structure. To create lists of JSON Patch operations, you can use the `JsonPatchDocument` as shown below.
-```csharp
-var updateTwinData = new JsonPatchDocument();
-updateTwinData.AppendAddOp("/Temperature", 25.0);
-updateTwinData.AppendAddOp("/myComponent/Property", "Hello");
-// Un-set a property
-updateTwinData.AppendRemoveOp("/Humidity");
-
-client.UpdateDigitalTwin("myTwin", updateTwinData);
-```
+:::code language="csharp" source="~/digital-twins-docs-samples/sdks/csharp/twin_operations_other.cs" id="UpdateTwin":::
## General API/SDK usage notes
@@ -281,9 +182,9 @@ The following list provides additional detail and general guidelines for using t
* All service functions exist in synchronous and asynchronous versions.
* All service functions throw an exception for any return status of 400 or above. Make sure you wrap calls into a `try` section, and catch at least `RequestFailedExceptions`. For more about this type of exception, see [here](/dotnet/api/azure.requestfailedexception?preserve-view=true&view=azure-dotnet).
* Most service methods return `Response<T>` (or `Task<Response<T>>` for the asynchronous calls), where `T` is the class of return object for the service call. The [`Response`](/dotnet/api/azure.response-1?preserve-view=true&view=azure-dotnet) class encapsulates the service return and presents return values in its `Value` field.
-* Service methods with paged results return `Pageable<T>` or `AsyncPageable<T>` as results. For more about the `Pageable<T>` class, see [here](/dotnet/api/azure.pageable-1?preserve-view=true&view=azure-dotnet-preview); for more about `AsyncPageable<T>`, see [here](/dotnet/api/azure.asyncpageable-1?preserve-view=true&view=azure-dotnet-preview).
+* Service methods with paged results return `Pageable<T>` or `AsyncPageable<T>` as results. For more about the `Pageable<T>` class, see [here](/dotnet/api/azure.pageable-1?preserve-view=true&view=azure-dotnet); for more about `AsyncPageable<T>`, see [here](/dotnet/api/azure.asyncpageable-1?preserve-view=true&view=azure-dotnet).
* You can iterate over paged results using an `await foreach` loop. For more about this process, see [here](/archive/msdn-magazine/2019/november/csharp-iterating-with-async-enumerables-in-csharp-8).
-* The underlying SDK is `Azure.Core`. See the [Azure namespace documentation](/dotnet/api/azure?preserve-view=true&view=azure-dotnet-preview) for reference on the SDK infrastructure and types.
+* The underlying SDK is `Azure.Core`. See the [Azure namespace documentation](/dotnet/api/azure?preserve-view=true&view=azure-dotnet) for reference on the SDK infrastructure and types.
Service methods return strongly-typed objects wherever possible. However, because Azure Digital Twins is based on models custom-configured by the user at runtime (via DTDL models uploaded to the service), many service APIs take and return twin data in JSON format.
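Tying a few of these notes together, here is a small sketch that iterates a paged `AsyncPageable<T>` result with `await foreach` and catches `RequestFailedException`, based on the model-listing call shown earlier in this article:

```csharp
using System;
using Azure;
using Azure.DigitalTwins.Core;
using Azure.Identity;

// Sketch only: <your-instance-hostname> is a placeholder.
var client = new DigitalTwinsClient(
    new Uri("https://<your-instance-hostname>"), new DefaultAzureCredential());

try
{
    // GetModelsAsync returns AsyncPageable<DigitalTwinsModelData>;
    // await foreach walks every page transparently.
    AsyncPageable<DigitalTwinsModelData> models = client.GetModelsAsync();
    await foreach (DigitalTwinsModelData md in models)
    {
        Console.WriteLine($"Model: {md.Id}");
    }
}
catch (RequestFailedException e)
{
    Console.WriteLine($"Error {e.Status}: {e.Message}");
}
```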
digital-twins https://docs.microsoft.com/en-us/azure/digital-twins/how-to-use-tags https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/digital-twins/how-to-use-tags.md
@@ -33,23 +33,7 @@ Marker tags are modeled as a [DTDL](https://github.com/Azure/opendigitaltwins-dt
Here is an excerpt from a twin model implementing a marker tag as a property:
-```json
-{
-  "@type": "Property",
-  "name": "tags",
-  "schema": {
-    "@type": "Map",
-    "mapKey": {
-      "name": "tagName",
-      "schema": "string"
-    },
-    "mapValue": {
-      "name": "tagValue",
-      "schema": "boolean"
-    }
-  }
-}
-```
+:::code language="json" source="~/digital-twins-docs-samples/models/tags.json" range="2-16":::
### Add marker tags to digital twins
@@ -57,11 +41,7 @@ Once the `tags` property is part of a digital twin's model, you can set the mark
Here is an example that populates the marker `tags` for three twins:
-```csharp
-entity-01: "tags": { "red": true, "round": true }
-entity-02: "tags": { "blue": true, "round": true }
-entity-03: "tags": { "red": true, "large": true }
-```
+:::code language="csharp" source="~/digital-twins-docs-samples/sdks/csharp/twin_operations_other.cs" id="TagPropertiesMarker":::
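For illustration, here is a hedged C# sketch of populating the marker `tags` map on one twin at creation time. The model ID is hypothetical; it just needs to define the `tags` property from the excerpt above:

```csharp
using System;
using System.Collections.Generic;
using Azure.DigitalTwins.Core;
using Azure.Identity;

// Sketch only: <your-instance-hostname> is a placeholder.
var client = new DigitalTwinsClient(
    new Uri("https://<your-instance-hostname>"), new DefaultAzureCredential());

var twin = new BasicDigitalTwin { Id = "entity-01" };
twin.Metadata.ModelId = "dtmi:example:thing;1"; // hypothetical model defining 'tags'
twin.Contents.Add("tags", new Dictionary<string, bool>
{
    ["red"] = true,   // marker tags carry no meaningful value;
    ["round"] = true  // only their presence or absence matters
});

await client.CreateOrReplaceDigitalTwinAsync("entity-01", twin);
```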
### Query with marker tags
@@ -69,15 +49,11 @@ Once tags have been added to digital twins, the tags can be used to filter the t
Here is a query to get all twins that have been tagged as "red":
-```sql
-SELECT * FROM digitaltwins WHERE is_defined(tags.red)
-```
+:::code language="sql" source="~/digital-twins-docs-samples/queries/queries.sql" id="QueryMarkerTags1":::
You can also combine tags for more complex queries. Here is a query to get all twins that are round, and not red:
-```sql
-SELECT * FROM digitaltwins WHERE NOT is_defined(tags.red) AND is_defined(tags.round)
-```
+:::code language="sql" source="~/digital-twins-docs-samples/queries/queries.sql" id="QueryMarkerTags2":::
## Value tags
@@ -89,23 +65,7 @@ Value tags are modeled as a [DTDL](https://github.com/Azure/opendigitaltwins-dtd
Here is an excerpt from a twin model implementing a value tag as a property:
-```json
-{
-  "@type": "Property",
-  "name": "tags",
-  "schema": {
-    "@type": "Map",
-    "mapKey": {
-      "name": "tagName",
-      "schema": "string"
-    },
-    "mapValue": {
-      "name": "tagValue",
-      "schema": "string"
-    }
-  }
-}
-```
+:::code language="json" source="~/digital-twins-docs-samples/models/tags.json" range="17-31":::
### Add value tags to digital twins
@@ -113,11 +73,7 @@ As with marker tags, you can set the value tag in a digital twin by setting the
Here is an example that populates the value `tags` for three twins:
-```csharp
-entity-01: "tags": { "red": "", "size": "large" }
-entity-02: "tags": { "purple": "", "size": "small" }
-entity-03: "tags": { "red": "", "size": "small" }
-```
+:::code language="csharp" source="~/digital-twins-docs-samples/sdks/csharp/twin_operations_other.cs" id="TagPropertiesValue":::
Note that `red` and `purple` are used as marker tags in this example.
@@ -125,17 +81,13 @@ Note that `red` and `purple` are used as marker tags in this example.
As with marker tags, you can use value tags to filter the twins in queries. You can also use value tags and marker tags together.
-From the example above, `red` is being used as a marker tag. Here is a query to get all twins that have been tagged as "red":
+From the example above, `red` is being used as a marker tag. Remember that this is a query to get all twins that have been tagged as "red":
-```sql
-SELECT * FROM digitaltwins WHERE is_defined(tags.red)
-```
+:::code language="sql" source="~/digital-twins-docs-samples/queries/queries.sql" id="QueryMarkerTags1":::
Here is a query to get all entities that are small (value tag), and not red:
-```sql
-SELECT * FROM digitaltwins WHERE NOT is_defined(tags.red) AND tags.size = 'small'
-```
+:::code language="sql" source="~/digital-twins-docs-samples/queries/queries.sql" id="QueryMarkerValueTags":::
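As a final hedged sketch, here is how you might run that combined filter from C# and read each twin's `tags` map out of the results (the shape of `tags` is assumed from the model excerpts above):

```csharp
using System;
using Azure.DigitalTwins.Core;
using Azure.Identity;

// Sketch only: <your-instance-hostname> is a placeholder.
var client = new DigitalTwinsClient(
    new Uri("https://<your-instance-hostname>"), new DefaultAzureCredential());

string query = "SELECT * FROM digitaltwins WHERE NOT is_defined(tags.red) AND tags.size = 'small'";

await foreach (BasicDigitalTwin twin in client.QueryAsync<BasicDigitalTwin>(query))
{
    // 'tags' comes back as an entry in the twin's Contents dictionary.
    Console.WriteLine($"{twin.Id}: {twin.Contents["tags"]}");
}
```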
## Next steps
digital-twins https://docs.microsoft.com/en-us/azure/digital-twins/quickstart-adt-explorer https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/digital-twins/quickstart-adt-explorer.md
@@ -253,9 +253,7 @@ In this section, you'll run a query to answer the question of how many twins in
To see the answer, run the following query in the **QUERY EXPLORER** box.
-```SQL
-SELECT * FROM DigitalTwins T WHERE T.Temperature > 75
-```
+:::code language="sql" source="~/digital-twins-docs-samples/queries/queries.sql" id="TemperatureQuery":::
Recall from viewing the twin properties earlier that Room0 has a temperature of 70, and Room1 has a temperature of 80. For this reason, only Room1 shows up in the results here.
@@ -286,9 +284,7 @@ Now, you'll see a **Patch Information** window where the patch code appears that
To verify that the graph successfully registered your update to the temperature for Room0, rerun the query from earlier to get all the twins in the environment with a temperature above 75.
-```SQL
-SELECT * FROM DigitalTwins T WHERE T.Temperature > 75
-```
+:::code language="sql" source="~/digital-twins-docs-samples/queries/queries.sql" id="TemperatureQuery":::
Now that the temperature of Room0 has been changed from 70 to 76, both twins should show up in the result.
digital-twins https://docs.microsoft.com/en-us/azure/digital-twins/troubleshoot-error-403 https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/digital-twins/troubleshoot-error-403.md
@@ -39,7 +39,7 @@ The first solution is to verify that your Azure user has the _**Azure Digital Tw
Note that this role is different from...
* the former name for this role during preview, *Azure Digital Twins Owner (Preview)* (the role is the same, but the name has changed)
* the *Owner* role on the entire Azure subscription. *Azure Digital Twins Data Owner* is a role within Azure Digital Twins and is scoped to this individual Azure Digital Twins instance.
-* the *Owner* role in Azure Digital Twins. These are two distinct Azure Digital Twins management roles, and *Azure Digital Twins Data Owner* is the role that should be used for management during preview.
+* the *Owner* role in Azure Digital Twins. These are two distinct Azure Digital Twins management roles, and *Azure Digital Twins Data Owner* is the role that should be used for management.
#### Check current setup
digital-twins https://docs.microsoft.com/en-us/azure/digital-twins/troubleshoot-metrics https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/digital-twins/troubleshoot-metrics.md
@@ -77,9 +77,9 @@ Metrics having to do with billing:
| Metric | Metric display name | Unit | Aggregation type | Description | Dimensions |
| --- | --- | --- | --- | --- | --- |
-| BillingApiOperations | Billing API Operations | Count | Total | Billing metric for the count of all API requests made against the Azure Digital Twins service. | Meter Id |
-| BillingMessagesProcessed | Billing Messages Processed | Count | Total | Billing metric for the number of messages sent out from Azure Digital Twins to external endpoints.<br><br>To be considered a single message for billing purposes, a payload must be no larger than 1 KB. Payloads larger than this will be counted as additional messages in 1 KB increments (so a message between 1 and 2 KB will be counted as 2 messages, between 2 and 3 KB will be 3 messages, and so on).<br>This restriction also applies to responses, so a call that returns 1.5 KB in the response body, for example, will be billed as 2 operations. | Meter Id |
-| BillingQueryUnits | Billing Query Units | Count | Total | The number of Query Units, an internally computed measure of service resource usage, consumed to execute queries. There is also a helper API available for measuring Query Units: [QueryChargeHelper Class](/dotnet/api/azure.digitaltwins.core.querychargehelper?preserve-view=true&view=azure-dotnet-preview) | Meter Id |
+| BillingApiOperations | Billing API Operations | Count | Total | Billing metric for the count of all API requests made against the Azure Digital Twins service. | Meter ID |
+| BillingMessagesProcessed | Billing Messages Processed | Count | Total | Billing metric for the number of messages sent out from Azure Digital Twins to external endpoints.<br><br>To be considered a single message for billing purposes, a payload must be no larger than 1 KB. Payloads larger than this will be counted as additional messages in 1 KB increments (so a message between 1 and 2 KB will be counted as 2 messages, between 2 and 3 KB will be 3 messages, and so on).<br>This restriction also applies to responses, so a call that returns 1.5 KB in the response body, for example, will be billed as 2 operations. | Meter ID |
+| BillingQueryUnits | Billing Query Units | Count | Total | The number of Query Units, an internally computed measure of service resource usage, consumed to execute queries. There is also a helper API available for measuring Query Units: [QueryChargeHelper Class](/dotnet/api/azure.digitaltwins.core.querychargehelper?preserve-view=true&view=azure-dotnet) | Meter ID |
For more details on the way Azure Digital Twins is billed, see [*Azure Digital Twins pricing*](https://azure.microsoft.com/pricing/details/digital-twins/).
digital-twins https://docs.microsoft.com/en-us/azure/digital-twins/tutorial-code https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/digital-twins/tutorial-code.md
@@ -58,9 +58,9 @@ This will create several files inside your directory, including one called *Prog
Keep the command window open, as you'll continue to use it throughout the tutorial.
-Next, **add two dependencies to your project** that will be needed to work with Azure Digital Twins. You can use the links below to navigate to the packages on NuGet, where you can find the console commands (including for .NET CLI) to add each one to your project.
-* [**Azure.DigitalTwins.Core**](https://www.nuget.org/packages/Azure.DigitalTwins.Core). This is the package for the [Azure Digital Twins SDK for .NET](/dotnet/api/overview/azure/digitaltwins/client?view=azure-dotnet&preserve-view=true). Add the latest version.
-* [**Azure.Identity**](https://www.nuget.org/packages/Azure.Identity). This library provides tools to help with authentication against Azure. Add version 1.2.2.
+Next, **add two dependencies to your project** that will be needed to work with Azure Digital Twins. You can use the links below to navigate to the packages on NuGet, where you can find the console commands (including for .NET CLI) to add the latest version of each to your project.
+* [**Azure.DigitalTwins.Core**](https://www.nuget.org/packages/Azure.DigitalTwins.Core). This is the package for the [Azure Digital Twins SDK for .NET](/dotnet/api/overview/azure/digitaltwins/client?view=azure-dotnet&preserve-view=true).
+* [**Azure.Identity**](https://www.nuget.org/packages/Azure.Identity). This library provides tools to help with authentication against Azure.
## Get started with project code
@@ -74,29 +74,19 @@ In this section, you will begin writing the code for your new app project to wor
There is also a section showing the complete code at the end of the tutorial. You can use this as a reference to check your program as you go.
-To begin, open the file *Program.cs* in any code editor. You will see a minimal code template that looks like this:
-
-```csharp
-using System;
-
-namespace DigitalTwinsCodeTutorial
-{
- class Program
- {
- static void Main(string[] args)
- {
- Console.WriteLine("Hello World!");
- }
- }
-}
-```
+To begin, open the file *Program.cs* in any code editor. You will see a minimal code template that looks something like this:
+
+:::row:::
+ :::column:::
+ :::image type="content" source="media/tutorial-code/starter-template.png" alt-text="A snippet of sample code. There is one 'using System;' statement, a namespace called DigitalTwinsCodeTutorial; a class in the namespace called Program; and a Main method in the class with a standard signature of 'static void Main(string[] args)'. The main method contains a Hello World print statement." lightbox="media/tutorial-code/starter-template.png":::
+ :::column-end:::
+ :::column:::
+ :::column-end:::
+:::row-end:::
First, add some `using` lines at the top of the code to pull in necessary dependencies.
-```csharp
-using Azure.DigitalTwins.Core;
-using Azure.Identity;
-```
+:::code language="csharp" source="~/digital-twins-docs-samples/sdks/csharp/fullClientApp.cs" id="Azure_Digital_Twins_dependencies":::
Next, you'll add code to this file to fill out some functionality.
@@ -109,12 +99,7 @@ In order to authenticate, you need the *hostName* of your Azure Digital Twins in
In *Program.cs*, paste the following code below the "Hello, World!" printout line in the `Main` method. Set the value of `adtInstanceUrl` to your Azure Digital Twins instance *hostName*.
-```csharp
-string adtInstanceUrl = "https://<your-Azure-Digital-Twins-instance-hostName>";
-var credential = new DefaultAzureCredential();
-DigitalTwinsClient client = new DigitalTwinsClient(new Uri(adtInstanceUrl), credential);
-Console.WriteLine($"Service client created ΓÇô ready to go");
-```
+:::code language="csharp" source="~/digital-twins-docs-samples/sdks/csharp/fullClientApp.cs" id="Authentication_code":::
Save the file.
@@ -136,25 +121,7 @@ The first step in creating an Azure Digital Twins solution is defining at least
In the directory where you created your project, create a new *.json* file called *SampleModel.json*. Paste in the following file body:
-```json
-{
- "@id": "dtmi:example:SampleModel;1",
- "@type": "Interface",
- "displayName": "SampleModel",
- "contents": [
- {
- "@type": "Relationship",
- "name": "contains"
- },
- {
- "@type": "Property",
- "name": "data",
- "schema": "string"
- }
- ],
- "@context": "dtmi:dtdl:context;2"
-}
-```
+:::code language="json" source="~/digital-twins-docs-samples/models/SampleModel.json":::
> [!TIP]
> If you're using Visual Studio for this tutorial, you may want to select the newly-created JSON file and set the *Copy to Output Directory* property in the Property inspector to *Copy if Newer* or *Copy Always*. This will enable Visual Studio to find the JSON file with the default path when you run the program with **F5** during the rest of the tutorial.
@@ -166,18 +133,11 @@ Next, add some more code to *Program.cs* to upload the model you've just created
First, add a few `using` statements to the top of the file:
-```csharp
-using System.Threading.Tasks;
-using System.IO;
-using System.Collections.Generic;
-using Azure;
-```
+:::code language="csharp" source="~/digital-twins-docs-samples/sdks/csharp/fullClientApp.cs" id="Model_dependencies":::
Next, prepare to use the asynchronous methods in the C# service SDK by changing the `Main` method signature to allow for async execution.
-```csharp
-static async Task Main(string[] args)
-```
+:::code language="csharp" source="~/digital-twins-docs-samples/sdks/csharp/fullClientApp.cs" id="Async_signature":::
> [!NOTE]
> Using `async` is not strictly required, as the SDK also provides synchronous versions of all calls. This tutorial practices using `async`.
@@ -186,15 +146,7 @@ Next comes the first bit of code that interacts with the Azure Digital Twins ser
Paste in the following code under the authentication code you added earlier.
-```csharp
-Console.WriteLine();
-Console.WriteLine($"Upload a model");
-var typeList = new List<string>();
-string dtdl = File.ReadAllText("SampleModel.json");
-typeList.Add(dtdl);
-// Upload the model to the service
-await client.CreateModelsAsync(typeList);
-```
+:::code language="csharp" source="~/digital-twins-docs-samples/sdks/csharp/fullClientApp_excerpt_model.cs":::
In your command window, run the program with this command:
@@ -205,15 +157,7 @@ dotnet run
To add a print statement showing all models that have been successfully uploaded to the instance, add the following code right after the previous section:
-```csharp
-// Read a list of models back from the service
-Console.WriteLine("Models uploaded to the instance:");
-AsyncPageable<DigitalTwinsModelData> modelDataList = client.GetModelsAsync();
-await foreach (DigitalTwinsModelData md in modelDataList)
-{
- Console.WriteLine($"{md.Id}");
-}
-```
+:::code language="csharp" source="~/digital-twins-docs-samples/sdks/csharp/fullClientApp.cs" id="Print_model":::
**Before you run the program again to test this new code**, recall that the last time you ran the program, you uploaded your model already. Azure Digital Twins will not let you upload the same model twice, so if you attempt to upload the same model again, the program should throw an exception.
@@ -231,13 +175,7 @@ The next section talks about exceptions like this and how to handle them in your
To keep the program from crashing, you can add exception code around the model upload code. Wrap the existing client call `await client.CreateModelsAsync(typeList)` in a try/catch handler, like this:
-```csharp
-try {
- await client.CreateModelsAsync(typeList);
-} catch (RequestFailedException rex) {
- Console.WriteLine($"Load model: {rex.Status}:{rex.Message}");
-}
-```
+:::code language="csharp" source="~/digital-twins-docs-samples/sdks/csharp/fullClientApp.cs" id="Model_try_catch":::
Now, if you run the program with `dotnet run` in your command window, you will see that you get an error code back. The output from the model creation code shows this error:
@@ -251,23 +189,7 @@ Now that you have uploaded a model to Azure Digital Twins, you can use this mode
Add the following code to the end of the `Main` method to create and initialize three digital twins based on this model.
-```csharp
-// Initialize twin data
-BasicDigitalTwin twinData = new BasicDigitalTwin();
-twinData.Metadata.ModelId = "dtmi:example:SampleModel;1";
-twinData.Contents.Add("data", $"Hello World!");
-
-string prefix="sampleTwin-";
-for(int i=0; i<3; i++) {
- try {
- twinData.Id = $"{prefix}{i}";
- await client.CreateOrReplaceDigitalTwinAsync<BasicDigitalTwin>(twinData.Id, twinData);
- Console.WriteLine($"Created twin: {prefix}{i}");
- } catch(RequestFailedException rex) {
- Console.WriteLine($"Create twin error: {rex.Status}:{rex.Message}");
- }
-}
-```
+:::code language="csharp" source="~/digital-twins-docs-samples/sdks/csharp/fullClientApp.cs" id="Initialize_twins":::
In your command window, run the program with `dotnet run`. In the output, look for the print messages that *sampleTwin-0*, *sampleTwin-1*, and *sampleTwin-2* were created.
@@ -281,34 +203,11 @@ Next, you can create **relationships** between the twins you've created, to conn
Add a **new static method** to the `Program` class, underneath the `Main` method (the code now has two methods):
-```csharp
-public async static Task CreateRelationship(DigitalTwinsClient client, string srcId, string targetId)
-{
- var relationship = new BasicRelationship
- {
- TargetId = targetId,
- Name = "contains"
- };
-
- try
- {
- string relId = $"{srcId}-contains->{targetId}";
- await client.CreateOrReplaceRelationshipAsync(srcId, relId, relationship);
- Console.WriteLine("Created relationship successfully");
- }
- catch (RequestFailedException rex) {
- Console.WriteLine($"Create relationship error: {rex.Status}:{rex.Message}");
- }
-}
-```
+:::code language="csharp" source="~/digital-twins-docs-samples/sdks/csharp/fullClientApp.cs" id="Create_relationship":::
Next, add the following code to the end of the `Main` method, to call the `CreateRelationship` method and use the code you just wrote:
-```csharp
-// Connect the twins with relationships
-await CreateRelationship(client, "sampleTwin-0", "sampleTwin-1");
-await CreateRelationship(client, "sampleTwin-0", "sampleTwin-2");
-```
+:::code language="csharp" source="~/digital-twins-docs-samples/sdks/csharp/fullClientApp.cs" id="Use_create_relationship":::
In your command window, run the program with `dotnet run`. In the output, look for print statements saying that the two relationships were created successfully.
@@ -320,32 +219,15 @@ The next code you'll add allows you to see the list of relationships you've crea
Add the following **new method** to the `Program` class:
-```csharp
-public async static Task ListRelationships(DigitalTwinsClient client, string srcId)
-{
- try {
- AsyncPageable<BasicRelationship> results = client.GetRelationshipsAsync<BasicRelationship>(srcId);
- Console.WriteLine($"Twin {srcId} is connected to:");
- await foreach (BasicRelationship rel in results)
- {
- Console.WriteLine($" -{rel.Name}->{rel.TargetId}");
- }
- } catch (RequestFailedException rex) {
- Console.WriteLine($"Relationship retrieval error: {rex.Status}:{rex.Message}");
- }
-}
-```
+:::code language="csharp" source="~/digital-twins-docs-samples/sdks/csharp/fullClientApp.cs" id="List_relationships":::
Then, add the following code to the end of the `Main` method to call the `ListRelationships` code:
-```csharp
-//List the relationships
-await ListRelationships(client, "sampleTwin-0");
-```
+:::code language="csharp" source="~/digital-twins-docs-samples/sdks/csharp/fullClientApp.cs" id="Use_list_relationships":::
In your command window, run the program with `dotnet run`. You should see a list of all the relationships you have created in an output statement that looks like this:
-:::image type="content" source= "media/tutorial-code/list-relationships.png" alt-text="Program output, showing a message that says 'Twin sampleTwin-0 is connected to: contains->sampleTwin-1, -contains->sampleTwin-2'":::
+:::image type="content" source= "media/tutorial-code/list-relationships.png" alt-text="Program output, showing a message that says 'Twin sampleTwin-0 is connected to: contains->sampleTwin-1, -contains->sampleTwin-2'" lightbox="media/tutorial-code/list-relationships.png":::
### Query digital twins
@@ -355,23 +237,11 @@ The last section of code to add in this tutorial runs a query against the Azure
Add this `using` statement to enable use of the `JsonSerializer` class to help present the digital twin information:
-```csharp
-using System.Text.Json;
-```
+:::code language="csharp" source="~/digital-twins-docs-samples/sdks/csharp/fullClientApp.cs" id="Query_dependencies":::
Then, add the following code to the end of the `Main` method:
-```csharp
-// Run a query for all twins
-string query = "SELECT * FROM digitaltwins";
-AsyncPageable<BasicDigitalTwin> result = client.QueryAsync<BasicDigitalTwin>(query);
-
-await foreach (BasicDigitalTwin twin in result)
-{
- Console.WriteLine(JsonSerializer.Serialize(twin));
- Console.WriteLine("---------------");
-}
-```
+:::code language="csharp" source="~/digital-twins-docs-samples/sdks/csharp/fullClientApp.cs" id="Query_twins":::
In your command window, run the program with `dotnet run`. You should see all the digital twins in this instance in the output.
@@ -379,120 +249,8 @@ In your command window, run the program with `dotnet run`. You should see all th
At this point in the tutorial, you have a complete client app, capable of performing basic actions against Azure Digital Twins. For reference, the full code of the program in *Program.cs* is listed below:
-```csharp
-using System;
-using Azure.DigitalTwins.Core;
-using Azure.Identity;
-using System.Threading.Tasks;
-using System.IO;
-using System.Collections.Generic;
-using Azure;
-using System.Text.Json;
-
-namespace minimal
-{
- class Program
- {
- static async Task Main(string[] args)
- {
- Console.WriteLine("Hello World!");
-
- string adtInstanceUrl = "https://<your-Azure-Digital-Twins-instance-hostName>";
-
- var credential = new DefaultAzureCredential();
- DigitalTwinsClient client = new DigitalTwinsClient(new Uri(adtInstanceUrl), credential);
- Console.WriteLine($"Service client created ΓÇô ready to go");
-
- Console.WriteLine();
- Console.WriteLine($"Upload a model");
- var typeList = new List<string>();
- string dtdl = File.ReadAllText("SampleModel.json");
- typeList.Add(dtdl);
-
- // Upload the model to the service
- try {
- await client.CreateModelsAsync(typeList);
- } catch (RequestFailedException rex) {
- Console.WriteLine($"Load model: {rex.Status}:{rex.Message}");
- }
- // Read a list of models back from the service
- Console.WriteLine("Models uploaded to the instance:");
- AsyncPageable<DigitalTwinsModelData> modelDataList = client.GetModelsAsync();
- await foreach (DigitalTwinsModelData md in modelDataList)
- {
- Console.WriteLine($"{md.Id}");
- }
-
- // Initialize twin data
- BasicDigitalTwin twinData = new BasicDigitalTwin();
- twinData.Metadata.ModelId = "dtmi:example:SampleModel;1";
- twinData.Contents.Add("data", $"Hello World!");
-
- string prefix="sampleTwin-";
- for(int i=0; i<3; i++) {
- try {
- twinData.Id = $"{prefix}{i}";
- await client.CreateOrReplaceDigitalTwinAsync<BasicDigitalTwin>(twinData.Id, twinData);
- Console.WriteLine($"Created twin: {prefix}{i}");
- } catch(RequestFailedException rex) {
- Console.WriteLine($"Create twin error: {rex.Status}:{rex.Message}");
- }
- }
-
- // Connect the twins with relationships
- await CreateRelationship(client, "sampleTwin-0", "sampleTwin-1");
- await CreateRelationship(client, "sampleTwin-0", "sampleTwin-2");
-
- //List the relationships
- await ListRelationships(client, "sampleTwin-0");
-
- // Run a query for all twins
- string query = "SELECT * FROM digitaltwins";
- AsyncPageable<BasicDigitalTwin> result = client.QueryAsync<BasicDigitalTwin>(query);
-
- await foreach (BasicDigitalTwin twin in result)
- {
- Console.WriteLine(JsonSerializer.Serialize(twin));
- Console.WriteLine("---------------");
- }
- }
-
- public async static Task CreateRelationship(DigitalTwinsClient client, string srcId, string targetId)
- {
- var relationship = new BasicRelationship
- {
- TargetId = targetId,
- Name = "contains"
- };
-
- try
- {
- string relId = $"{srcId}-contains->{targetId}";
- await client.CreateOrReplaceRelationshipAsync(srcId, relId, relationship);
- Console.WriteLine("Created relationship successfully");
- }
- catch (RequestFailedException rex) {
- Console.WriteLine($"Create relationship error: {rex.Status}:{rex.Message}");
- }
- }
-
- public async static Task ListRelationships(DigitalTwinsClient client, string srcId)
- {
- try {
- AsyncPageable<BasicRelationship> results = client.GetRelationshipsAsync<BasicRelationship>(srcId);
- Console.WriteLine($"Twin {srcId} is connected to:");
- await foreach (BasicRelationship rel in results)
- {
- Console.WriteLine($" -{rel.Name}->{rel.TargetId}");
- }
- } catch (RequestFailedException rex) {
- Console.WriteLine($"Relationship retrieval error: {rex.Status}:{rex.Message}");
- }
- }
-
- }
-}
-```
+:::code language="csharp" source="~/digital-twins-docs-samples/sdks/csharp/fullClientApp.cs":::
+
## Clean up resources

The instance used in this tutorial can be reused in the next tutorial, [*Tutorial: Explore the basics with a sample client app*](tutorial-command-line-app.md). If you plan to continue to the next tutorial, you can keep the Azure Digital Twins instance you set up here.
digital-twins https://docs.microsoft.com/en-us/azure/digital-twins/tutorial-command-line-app https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/digital-twins/tutorial-command-line-app.md
@@ -52,27 +52,15 @@ Select *Room.json* to open it in the editing window, and change it in the follow
1. **Edit a property**. Change the name of the `Humidity` property to *HumidityLevel* (or something different if you'd like; if you use something other than *HumidityLevel*, remember what you used and continue using it instead of *HumidityLevel* throughout the tutorial).
1. **Add a property**. Underneath the `HumidityLevel` property that ends on line 15, paste the following code to add a `RoomName` property to the room:
- ```json
- ,
- {
- "@type": "Property",
- "name": "RoomName",
- "schema": "string"
- }
- ```
+ :::code language="json" source="~/digital-twins-docs-samples/models/Room.json" range="16-20":::
+
1. **Add a relationship**. Underneath the `RoomName` property that you just added, paste the following code to add the ability for this type of twin to form *contains* relationships with other twins:
- ```json
- ,
- {
- "@type": "Relationship",
- "name": "contains"
- }
- ```
+ :::code language="json" source="~/digital-twins-docs-samples/models/Room.json" range="21-24":::
-When you are finished, the updated model should look like this:
+When you are finished, the updated model should match this:
-:::image type="content" source="media/tutorial-command-line-app/room-model.png" alt-text="Edited Room.json with updated version number, HumidityLevel and RoomName properties, and contains relationship" border="false":::
+:::code language="json" source="~/digital-twins-docs-samples/models/Room.json":::
Make sure to save the file before moving on.
digital-twins https://docs.microsoft.com/en-us/azure/digital-twins/tutorial-end-to-end https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/digital-twins/tutorial-end-to-end.md
@@ -90,10 +90,7 @@ Query
>
> Here is the full query body to get all digital twins in your instance:
>
-> ```sql
-> SELECT *
-> FROM DIGITALTWINS
-> ```
+> :::code language="sql" source="~/digital-twins-docs-samples/queries/queries.sql" id="GetAllTwins":::
After this, you can stop running the project. Keep the solution open in Visual Studio, though, as you'll continue using it throughout the tutorial.
dms https://docs.microsoft.com/en-us/azure/dms/quickstart-create-data-migration-service-portal https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/dms/quickstart-create-data-migration-service-portal.md
@@ -27,7 +27,7 @@ Open your web browser, navigate to the [Microsoft Azure portal](https://portal.a
The default view is your service dashboard.

> [!NOTE]
-> You can create up to 10 instances of DMS per subscription. If you require a greater number of instances, please create a support ticket.
+> You can create up to 10 instances of DMS per subscription per region. If you require a greater number of instances, please create a support ticket.
## Register the resource provider
dms https://docs.microsoft.com/en-us/azure/dms/tutorial-mysql-azure-mysql-online https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/dms/tutorial-mysql-azure-mysql-online.md
@@ -233,7 +233,7 @@ After the service is created, locate it within the Azure portal, open it, and th
![Map to target databases](media/tutorial-mysql-to-azure-mysql-online/dms-map-target-details.png)

> [!NOTE]
- > Though you can select multiple databases in this step, each instance of Azure Database Migration Service supports up to four databases for concurrent migration. Also, there is a limit of two instances of Azure Database Migration Service per region in a subscription. For example, if you have 40 databases to migrate, you can only migrate eight of them concurrently, and only if you have created two instances of Azure Database Migration Service.
+ > Though you can select multiple databases in this step, each instance of Azure Database Migration Service supports up to 4 databases for concurrent migration. Also, there is a limit of 10 instances of Azure Database Migration Service per subscription per region. For example, if you have 80 databases to migrate, you can migrate 40 of them to the same region concurrently, but only if you have created 10 instances of the Azure Database Migration Service.
3. Select **Save**. On the **Migration summary** screen, in the **Activity name** text box, specify a name for the migration activity, and then review the summary to ensure that the source and target details match what you previously specified.
event-grid https://docs.microsoft.com/en-us/azure/event-grid/handler-functions https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/event-grid/handler-functions.md
@@ -1,26 +1,26 @@
---
-title: Azure function as an event handler for Azure Event Grid events
-description: Describes how you can use Azure functions as event handlers for Event Grid events.
+title: Use a function in Azure as an event handler for Azure Event Grid events
+description: Describes how you can use functions created in and hosted by Azure Functions as event handlers for Event Grid events.
ms.topic: conceptual ms.date: 09/18/2020 ---
-# Azure function as an event handler for Event Grid events
+# Use a function as an event handler for Event Grid events
An event handler is the place where the event is sent. The handler takes an action to process the event. Several Azure services are automatically configured to handle events and **Azure Functions** is one of them.
-To use an Azure function as a handler for events, follow one of these approaches:
+To use a function in Azure as a handler for events, follow one of these approaches:
-- Use [Event Grid trigger](../azure-functions/functions-bindings-event-grid-trigger.md). Specify **Azure Function** as the **endpoint type**. Then, specify the Azure function app and the function that will handle events.
-- Use [HTTP trigger](../azure-functions/functions-bindings-http-webhook.md). Specify **Web Hook** as the **endpoint type**. Then, specify the URL for the Azure function that will handle events.
+- Use [Event Grid trigger](../azure-functions/functions-bindings-event-grid-trigger.md). Specify **Azure Function** as the **endpoint type**. Then, specify the function app and the function that will handle events.
+- Use [HTTP trigger](../azure-functions/functions-bindings-http-webhook.md). Specify **Web Hook** as the **endpoint type**. Then, specify the URL for the function that will handle events.
We recommend that you use the first approach (Event Grid trigger) as it has the following advantages over the second approach:

- Event Grid automatically validates Event Grid triggers. With generic HTTP triggers, you must implement the [validation response](webhook-event-delivery.md) yourself.
- Event Grid automatically adjusts the rate at which events are delivered to a function triggered by an Event Grid event based on the perceived rate at which the function can process events. This rate match feature averts delivery errors that stem from the inability of a function to process events, as the function's event processing rate can vary over time. To improve efficiency at high throughput, enable batching on the event subscription. For more information, see [Enable batching](#enable-batching).

> [!NOTE]
- > Currently, you can't use an Event Grid trigger for an Azure Functions app when the event is delivered in the **CloudEvents** schema. Instead, use an HTTP trigger.
+ > Currently, you can't use an Event Grid trigger for a function app when the event is delivered in the **CloudEvents** schema. Instead, use an HTTP trigger.
## Tutorials
@@ -76,4 +76,4 @@ You can use the [az eventgrid event-subscription create](/cli/azure/eventgrid/ev
You can use the [New-AzEventGridSubscription](/powershell/module/az.eventgrid/new-azeventgridsubscription) or [Update-AzEventGridSubscription](/powershell/module/az.eventgrid/update-azeventgridsubscription) cmdlet to configure batch-related settings using the following parameters: `-MaxEventsPerBatch` or `-PreferredBatchSizeInKiloBytes`.
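For illustration, here's a hedged sketch of creating a subscription with these batch settings; the resource group, topic, function resource ID, and batch values below are placeholder assumptions, not values from this article:

```azurepowershell
# Sketch only: names, the function resource ID, and batch sizes are illustrative assumptions.
New-AzEventGridSubscription `
  -ResourceGroupName "myResourceGroup" `
  -TopicName "myTopic" `
  -EventSubscriptionName "myFunctionSubscription" `
  -EndpointType "azurefunction" `
  -Endpoint "/subscriptions/<subscription-id>/resourceGroups/myResourceGroup/providers/Microsoft.Web/sites/myFunctionApp/functions/myFunction" `
  -MaxEventsPerBatch 10 `
  -PreferredBatchSizeInKiloBytes 64
```

## Next steps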
-See the [Event handlers](event-handlers.md) article for a list of supported event handlers.
\ No newline at end of file
+See the [Event handlers](event-handlers.md) article for a list of supported event handlers.
event-grid https://docs.microsoft.com/en-us/azure/event-grid/onboard-partner https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/event-grid/onboard-partner.md
@@ -107,7 +107,6 @@ To complete the remaining steps, make sure you have:
1. In the **Channel details** section, do these steps:
    1. For **Event channel name**, enter a name for the event channel.
    1. Enter the **source**. See [Cloud Events 1.0 specifications](https://github.com/cloudevents/spec/blob/v1.0/spec.md#source-1) to get an idea of a suitable value for the source. Also, see [this Cloud Events schema example](cloud-event-schema.md#sample-event-using-cloudevents-schema).
- 1. Enter the source (WHAT IS IT?).
1. In the **Destination details** section, enter details for the destination partner topic that will be created for this event channel.
    1. Enter the **ID of the subscription** in which the partner topic will be created.
    1. Enter the **name of the resource group** in which the partner topic resource will be created.
event-grid https://docs.microsoft.com/en-us/azure/event-grid/policy-reference https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/event-grid/policy-reference.md
@@ -1,7 +1,7 @@
--- title: Built-in policy definitions for Azure Event Grid description: Lists Azure Policy built-in policy definitions for Azure Event Grid. These built-in policy definitions provide common approaches to managing your Azure resources.
-ms.date: 11/20/2020
+ms.date: 01/08/2021
ms.topic: reference ms.custom: subject-policy-reference ---
event-hubs https://docs.microsoft.com/en-us/azure/event-hubs/policy-reference https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/event-hubs/policy-reference.md
@@ -1,7 +1,7 @@
--- title: Built-in policy definitions for Azure Event Hubs description: Lists Azure Policy built-in policy definitions for Azure Event Hubs. These built-in policy definitions provide common approaches to managing your Azure resources.
-ms.date: 11/20/2020
+ms.date: 01/08/2021
ms.topic: reference ms.custom: subject-policy-reference ---
event-hubs https://docs.microsoft.com/en-us/azure/event-hubs/security-controls-policy https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/event-hubs/security-controls-policy.md
@@ -1,7 +1,7 @@
--- title: Azure Policy Regulatory Compliance controls for Azure Event Hubs description: Lists Azure Policy Regulatory Compliance controls available for Azure Event Hubs. These built-in policy definitions provide common approaches to managing the compliance of your Azure resources.
-ms.date: 11/20/2020
+ms.date: 01/08/2021
ms.topic: sample author: spelluru ms.author: spelluru
firewall-manager https://docs.microsoft.com/en-us/azure/firewall-manager/secure-hybrid-network https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/firewall-manager/secure-hybrid-network.md
@@ -102,7 +102,7 @@ If you don't have an Azure subscription, create a [free account](https://azure.m
## Create the firewall hub virtual network > [!NOTE]
-> The size of the AzureFirewallSubnet subnet is /26. For more information about the subnet size, see [Azure Firewall FAQ](../firewall/firewall-faq.md#why-does-azure-firewall-need-a-26-subnet-size).
+> The size of the AzureFirewallSubnet subnet is /26. For more information about the subnet size, see [Azure Firewall FAQ](../firewall/firewall-faq.yml#why-does-azure-firewall-need-a--26-subnet-size).
1. From the Azure portal home page, select **Create a resource**. 2. Under **Networking**, select **Virtual network**.
firewall https://docs.microsoft.com/en-us/azure/firewall/deploy-cli https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/firewall/deploy-cli.md
@@ -63,7 +63,7 @@ az group create --name Test-FW-RG --location eastus
This virtual network has three subnets. > [!NOTE]
-> The size of the AzureFirewallSubnet subnet is /26. For more information about the subnet size, see [Azure Firewall FAQ](firewall-faq.md#why-does-azure-firewall-need-a-26-subnet-size).
+> The size of the AzureFirewallSubnet subnet is /26. For more information about the subnet size, see [Azure Firewall FAQ](firewall-faq.yml#why-does-azure-firewall-need-a--26-subnet-size).
```azurecli-interactive az network vnet create \
firewall https://docs.microsoft.com/en-us/azure/firewall/deploy-ps https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/firewall/deploy-ps.md
@@ -64,7 +64,7 @@ New-AzResourceGroup -Name Test-FW-RG -Location "East US"
This virtual network has three subnets: > [!NOTE]
-> The size of the AzureFirewallSubnet subnet is /26. For more information about the subnet size, see [Azure Firewall FAQ](firewall-faq.md#why-does-azure-firewall-need-a-26-subnet-size).
+> The size of the AzureFirewallSubnet subnet is /26. For more information about the subnet size, see [Azure Firewall FAQ](firewall-faq.yml#why-does-azure-firewall-need-a--26-subnet-size).
```azurepowershell $Bastionsub = New-AzVirtualNetworkSubnetConfig -Name AzureBastionSubnet -AddressPrefix 10.0.0.0/27
firewall https://docs.microsoft.com/en-us/azure/firewall/firewall-faq https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/firewall/firewall-faq.md deleted file mode 100644
@@ -1,223 +0,0 @@
-title: Azure Firewall FAQ
-description: FAQ for Azure Firewall. A managed, cloud-based network security service that protects your Azure Virtual Network resources.
-services: firewall
-author: vhorne
-ms.service: firewall
-ms.topic: conceptual
-ms.date: 08/13/2020
-ms.author: victorh
-
-# Azure Firewall FAQ
-
-## What is Azure Firewall?
-
-Azure Firewall is a managed, cloud-based network security service that protects your Azure Virtual Network resources. It's a fully stateful firewall-as-a-service with built-in high availability and unrestricted cloud scalability. You can centrally create, enforce, and log application and network connectivity policies across subscriptions and virtual networks.
-
-## What capabilities are supported in Azure Firewall?
-
-To learn about Azure Firewall features, see [Azure Firewall features](features.md).
-
-## What is the typical deployment model for Azure Firewall?
-
-You can deploy Azure Firewall on any virtual network, but customers typically deploy it on a central virtual network and peer other virtual networks to it in a hub-and-spoke model. You can then set the default route from the peered virtual networks to point to this central firewall virtual network. Global VNet peering is supported, but it isn't recommended because of potential performance and latency issues across regions. For best performance, deploy one firewall per region.
-
-The advantage of this model is the ability to centrally exert control on multiple spoke VNETs across different subscriptions. There are also cost savings as you don't need to deploy a firewall in each VNet separately. The cost savings should be measured against the associated peering cost, based on the customer's traffic patterns.
-
-## How can I install the Azure Firewall?
-
-You can set up Azure Firewall by using the Azure portal, PowerShell, REST API, or by using templates. See [Tutorial: Deploy and configure Azure Firewall using the Azure portal](tutorial-firewall-deploy-portal.md) for step-by-step instructions.
-
-## What are some Azure Firewall concepts?
-
-Azure Firewall supports rules and rule collections. A rule collection is a set of rules that share the same order and priority. Rule collections are executed in order of their priority. Network rule collections are higher priority than application rule collections, and all rules are terminating.
-
-There are three types of rule collections:
-
-* *Application rules*: Configure fully qualified domain names (FQDNs) that can be accessed from a subnet.
-* *Network rules*: Configure rules that contain source addresses, protocols, destination ports, and destination addresses.
-* *NAT rules*: Configure DNAT rules to allow incoming Internet connections.
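For illustration, here's a minimal Azure PowerShell sketch of one rule type, a DNAT rule and its collection; all names, addresses, and ports here are placeholder assumptions:

```azurepowershell
# Sketch only: names, addresses, and ports are illustrative assumptions.
$natRule = New-AzFirewallNatRule -Name "rdp-dnat" `
    -Protocol "TCP" `
    -SourceAddress "*" `
    -DestinationAddress "<firewall-public-ip>" `
    -DestinationPort "3389" `
    -TranslatedAddress "10.0.2.4" `
    -TranslatedPort "3389"

$natCollection = New-AzFirewallNatRuleCollection -Name "dnat-coll" -Priority 100 -Rule $natRule
```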
-
-## Does Azure Firewall support inbound traffic filtering?
-
-Azure Firewall supports inbound and outbound filtering. Inbound protection is typically used for non-HTTP/S protocols such as RDP, SSH, and FTP. For best inbound HTTP/S protection, use a web application firewall such as [Azure Web Application Firewall (WAF)](../web-application-firewall/overview.md).
-
-## Which logging and analytics services are supported by the Azure Firewall?
-
-Azure Firewall is integrated with Azure Monitor for viewing and analyzing firewall logs. Logs can be sent to Log Analytics, Azure Storage, or Event Hubs. They can be analyzed in Log Analytics or by different tools such as Excel and Power BI. For more information, see [Tutorial: Monitor Azure Firewall logs](./firewall-diagnostics.md).
-
-## How does Azure Firewall work differently from existing services such as NVAs in the marketplace?
-
-Azure Firewall is a basic firewall service that can address certain customer scenarios. It's expected that you'll have a mix of third-party NVAs and Azure Firewall. Working better together is a core priority.
-
-## What is the difference between Application Gateway WAF and Azure Firewall?
-
-The Web Application Firewall (WAF) is a feature of Application Gateway that provides centralized inbound protection of your web applications from common exploits and vulnerabilities. Azure Firewall provides inbound protection for non-HTTP/S protocols (for example, RDP, SSH, FTP), outbound network-level protection for all ports and protocols, and application-level protection for outbound HTTP/S.
-
-## What is the difference between Network Security Groups (NSGs) and Azure Firewall?
-
-The Azure Firewall service complements network security group functionality. Together, they provide better "defense-in-depth" network security. Network security groups provide distributed network layer traffic filtering to limit traffic to resources within virtual networks in each subscription. Azure Firewall is a fully stateful, centralized network firewall as-a-service, which provides network- and application-level protection across different subscriptions and virtual networks.
-
-## Are Network Security Groups (NSGs) supported on the AzureFirewallSubnet?
-
-Azure Firewall is a managed service with multiple protection layers, including platform protection with NIC level NSGs (not viewable). Subnet level NSGs aren't required on the AzureFirewallSubnet, and are disabled to ensure no service interruption.
-
-## How do I set up Azure Firewall with my service endpoints?
-
-For secure access to PaaS services, we recommend service endpoints. You can choose to enable service endpoints in the Azure Firewall subnet and disable them on the connected spoke virtual networks. This way you benefit from both features: service endpoint security and central logging for all traffic.
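A minimal sketch of enabling a service endpoint on the firewall subnet, assuming placeholder resource names, the subnet prefix, and the **Microsoft.Storage** endpoint:

```azurepowershell
# Sketch only: names, the prefix, and the endpoint are illustrative assumptions.
$vnet = Get-AzVirtualNetwork -ResourceGroupName "RG Name" -Name "VNet Name"
Set-AzVirtualNetworkSubnetConfig -VirtualNetwork $vnet -Name "AzureFirewallSubnet" `
    -AddressPrefix "10.0.1.0/26" -ServiceEndpoint "Microsoft.Storage"
Set-AzVirtualNetwork -VirtualNetwork $vnet
```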
-
-## What is the pricing for Azure Firewall?
-
-See [Azure Firewall Pricing](https://azure.microsoft.com/pricing/details/azure-firewall/).
-
-## How can I stop and start Azure Firewall?
-
-You can use Azure PowerShell *deallocate* and *allocate* methods.
-
-For example:
-
-```azurepowershell
-# Stop an existing firewall
-
-$azfw = Get-AzFirewall -Name "FW Name" -ResourceGroupName "RG Name"
-$azfw.Deallocate()
-Set-AzFirewall -AzureFirewall $azfw
-```
-
-```azurepowershell
-# Start a firewall
-
-$azfw = Get-AzFirewall -Name "FW Name" -ResourceGroupName "RG Name"
-$vnet = Get-AzVirtualNetwork -ResourceGroupName "RG Name" -Name "VNet Name"
-$publicip1 = Get-AzPublicIpAddress -Name "Public IP1 Name" -ResourceGroupName "RG Name"
-$publicip2 = Get-AzPublicIpAddress -Name "Public IP2 Name" -ResourceGroupName "RG Name"
-$azfw.Allocate($vnet,@($publicip1,$publicip2))
-
-Set-AzFirewall -AzureFirewall $azfw
-```
-
-> [!NOTE]
-> You must reallocate a firewall and public IP to the original resource group and subscription.
-
-## What are the known service limits?
-
-For Azure Firewall service limits, see [Azure subscription and service limits, quotas, and constraints](../azure-resource-manager/management/azure-subscription-service-limits.md#azure-firewall-limits).
-
-## Can Azure Firewall in a hub virtual network forward and filter network traffic between two spoke virtual networks?
-
-Yes, you can use Azure Firewall in a hub virtual network to route and filter traffic between two spoke virtual networks. Subnets in each of the spoke virtual networks must have a UDR pointing to the Azure Firewall as a default gateway for this scenario to work properly.
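As a minimal sketch, the default route for a spoke subnet might be configured like this, assuming placeholder names, a location, and a firewall private IP of 10.0.1.4:

```azurepowershell
# Sketch only: names, the location, and the firewall private IP are illustrative assumptions.
$route = New-AzRouteConfig -Name "DefaultToFirewall" `
    -AddressPrefix "0.0.0.0/0" `
    -NextHopType "VirtualAppliance" `
    -NextHopIpAddress "10.0.1.4"  # private IP of the firewall in AzureFirewallSubnet

$routeTable = New-AzRouteTable -Name "SpokeRouteTable" `
    -ResourceGroupName "RG Name" -Location "East US" -Route $route
```

The route table would then be associated with each spoke subnet (for example, with `Set-AzVirtualNetworkSubnetConfig -RouteTable`).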
-
-## Can Azure Firewall forward and filter network traffic between subnets in the same virtual network or peered virtual networks?
-
-Yes. However, configuring the UDRs to redirect traffic between subnets in the same VNET requires additional attention. While using the VNET address range as a target prefix for the UDR is sufficient, this also routes all traffic from one machine to another machine in the same subnet through the Azure Firewall instance. To avoid this, include a route for the subnet in the UDR with a next hop type of **VNET**. Managing these routes might be cumbersome and prone to error. The recommended method for internal network segmentation is to use Network Security Groups, which don't require UDRs.
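Building on the previous sketch, the intra-subnet exception route described above might look like this; the subnet prefix is a placeholder assumption:

```azurepowershell
# Sketch only: keeps traffic destined for the subnet's own prefix local
# instead of hairpinning it through the firewall. The prefix is illustrative.
$localRoute = New-AzRouteConfig -Name "KeepSubnetLocal" `
    -AddressPrefix "10.0.2.0/24" `
    -NextHopType "VnetLocal"
```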
-
-## Does Azure Firewall outbound SNAT between private networks?
-
-Azure Firewall doesn't SNAT when the destination IP address is a private IP range per [IANA RFC 1918](https://tools.ietf.org/html/rfc1918). If your organization uses a public IP address range for private networks, Azure Firewall SNATs the traffic to one of the firewall private IP addresses in AzureFirewallSubnet. You can configure Azure Firewall to **not** SNAT your public IP address range. For more information, see [Azure Firewall SNAT private IP address ranges](snat-private-range.md).
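For illustration, adjusting the SNAT private range with Azure PowerShell might look like this; the added range is a placeholder assumption, and the linked article covers the supported values:

```azurepowershell
# Sketch only: the extra range is an illustrative assumption.
$azfw = Get-AzFirewall -Name "FW Name" -ResourceGroupName "RG Name"
$azfw.PrivateRange = @("IANAPrivateRanges", "198.51.100.0/24")
Set-AzFirewall -AzureFirewall $azfw
```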
-
-## Is forced tunneling/chaining to a Network Virtual Appliance supported?
-
-Forced tunneling is supported when you create a new firewall. You can't configure an existing firewall for forced tunneling. For more information, see [Azure Firewall forced tunneling](forced-tunneling.md).
-
-Azure Firewall must have direct Internet connectivity. If your AzureFirewallSubnet learns a default route to your on-premises network via BGP, you must override this with a 0.0.0.0/0 UDR with the **NextHopType** value set as **Internet** to maintain direct Internet connectivity.
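A hedged one-route sketch of that override (the route name is an assumption); it would sit in a route table associated with the AzureFirewallSubnet:

```azurepowershell
# Sketch only: restores direct Internet egress for the AzureFirewallSubnet.
$override = New-AzRouteConfig -Name "FirewallDirectInternet" `
    -AddressPrefix "0.0.0.0/0" `
    -NextHopType "Internet"
```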
-
-If your configuration requires forced tunneling to an on-premises network and you can determine the target IP prefixes for your Internet destinations, you can configure these ranges with the on-premises network as the next hop via a user defined route on the AzureFirewallSubnet. Or, you can use BGP to define these routes.
-
-## Are there any firewall resource group restrictions?
-
-Yes. The firewall, VNet, and the public IP address all must be in the same resource group.
-
-## When configuring DNAT for inbound Internet network traffic, do I also need to configure a corresponding network rule to allow that traffic?
-
-No. NAT rules implicitly add a corresponding network rule to allow the translated traffic. You can override this behavior by explicitly adding a network rule collection with deny rules that match the translated traffic. To learn more about Azure Firewall rule processing logic, see [Azure Firewall rule processing logic](rule-processing.md).
-
-## How do wildcards work in an application rule target FQDN?
-
-Wildcards currently can only be used on the left side of the FQDN. For example, ***.contoso.com** and ***contoso.com**.
-
-If you configure ***.contoso.com**, it allows *anyvalue*.contoso.com, but not contoso.com (the domain apex). If you want to allow the domain apex, you must explicitly configure it as a target FQDN.
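As a sketch, such an application rule might be built like this; the source range and rule names are placeholder assumptions, and `contoso.com` is listed explicitly to cover the domain apex:

```azurepowershell
# Sketch only: the source address and names are illustrative assumptions.
$appRule = New-AzFirewallApplicationRule -Name "allow-contoso" `
    -SourceAddress "10.0.2.0/24" `
    -TargetFqdn "*.contoso.com", "contoso.com" `
    -Protocol "Https:443"

$appCollection = New-AzFirewallApplicationRuleCollection -Name "contoso-coll" `
    -Priority 200 -ActionType "Allow" -Rule $appRule
```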
-
-## What does *Provisioning state: Failed* mean?
-
-Whenever a configuration change is applied, Azure Firewall attempts to update all its underlying backend instances. In rare cases, one of these backend instances may fail to update with the new configuration and the update process stops with a failed provisioning state. Your Azure Firewall is still operational, but the applied configuration may be in an inconsistent state, where some instances have the previous configuration while others have the updated rule set. If this happens, try updating your configuration one more time until the operation succeeds and your Firewall is in a *Succeeded* provisioning state.
-
-## How does Azure Firewall handle planned maintenance and unplanned failures?
-Azure Firewall consists of several backend nodes in an active-active configuration. For any planned maintenance, we have connection draining logic to gracefully update nodes. Updates are planned during non-business hours for each of the Azure regions to further limit risk of disruption. For unplanned issues, we instantiate a new node to replace the failed node. Connectivity to the new node is typically reestablished within 10 seconds from the time of the failure.
-
-## How does connection draining work?
-
-For any planned maintenance, connection draining logic gracefully updates backend nodes. Azure Firewall waits 90 seconds for existing connections to close. If needed, clients can automatically re-establish connectivity to another backend node.
-
-## Is there a character limit for a firewall name?
-
-Yes. There's a 50 character limit for a firewall name.
-
-## Why does Azure Firewall need a /26 subnet size?
-
-Azure Firewall must provision more virtual machine instances as it scales. A /26 address space ensures that the firewall has enough IP addresses available to accommodate the scaling.
-
-## Does the firewall subnet size need to change as the service scales?
-
-No. Azure Firewall doesn't need a subnet bigger than /26.
-
-## How can I increase my firewall throughput?
-
-Azure Firewall's initial throughput capacity is 2.5 - 3 Gbps and it scales out to 30 Gbps. It scales out automatically based on CPU usage and throughput.
-
-## How long does it take for Azure Firewall to scale out?
-
-Azure Firewall gradually scales when average throughput or CPU consumption is at 60%. A default deployment maximum throughput is approximately 2.5 - 3 Gbps and starts to scale out when it reaches 60% of that number. Scale out takes five to seven minutes.
-
-When performance testing, make sure you test for at least 10 to 15 minutes, and start new connections to take advantage of newly created Firewall nodes.
-
-## Does Azure Firewall allow access to Active Directory by default?
-
-No. Azure Firewall blocks Active Directory access by default. To allow access, configure the AzureActiveDirectory service tag. For more information, see [Azure Firewall service tags](service-tags.md).
-
-## Can I exclude a FQDN or an IP address from Azure Firewall Threat Intelligence based filtering?
-
-Yes, you can use Azure PowerShell to do it:
-
-```azurepowershell
-# Add a Threat Intelligence allow list to an Existing Azure Firewall
-
-## Create the allow list with both FQDN and IPAddresses
-
-$fw = Get-AzFirewall -Name "Name_of_Firewall" -ResourceGroupName "Name_of_ResourceGroup"
-$fw.ThreatIntelWhitelist = New-AzFirewallThreatIntelWhitelist `
- -FQDN @("fqdn1", "fqdn2", …) -IpAddress @("ip1", "ip2", …)
-
-## Or Update FQDNs and IpAddresses separately
-
-$fw = Get-AzFirewall -Name $firewallname -ResourceGroupName $RG
-$fw.ThreatIntelWhitelist.IpAddresses = @($fw.ThreatIntelWhitelist.IpAddresses + $ipaddresses)
-$fw.ThreatIntelWhitelist.fqdns = @($fw.ThreatIntelWhitelist.fqdns + $fqdns)
-Set-AzFirewall -AzureFirewall $fw
-```
-
-## Why can a TCP ping and similar tools successfully connect to a target FQDN even when no rule on Azure Firewall allows that traffic?
-
-A TCP ping isn't actually connecting to the target FQDN. This happens because Azure Firewall's transparent proxy listens on port 80/443 for outbound traffic. The TCP ping establishes a connection with the firewall, which then drops the packet. This behavior doesn't have any security impact. However, to avoid confusion we're investigating potential changes to this behavior.
-
-## Are there limits for the number of IP addresses supported by IP Groups?
-
-Yes. For more information, see [Azure subscription and service limits, quotas, and constraints](../azure-resource-manager/management/azure-subscription-service-limits.md#azure-firewall-limits)
-
-## Can I move an IP Group to another resource group?
-
-No, moving an IP Group to another resource group isn't currently supported.
-
-## What is the TCP Idle Timeout for Azure Firewall?
-
-A standard behavior of a network firewall is to ensure TCP connections are kept alive and to promptly close them if there's no activity. Azure Firewall TCP Idle Timeout is four minutes. This setting isn't configurable. If a period of inactivity is longer than the timeout value, there's no guarantee that the TCP or HTTP session is maintained. A common practice is to use a TCP keep-alive. This practice keeps the connection active for a longer period. For more information, see the [.NET examples](/dotnet/api/system.net.servicepoint.settcpkeepalive?view=netcore-3.1#System_Net_ServicePoint_SetTcpKeepAlive_System_Boolean_System_Int32_System_Int32_).
-
-## Can I deploy Azure Firewall without a public IP address?
-
-No, currently you must deploy Azure Firewall with a public IP address.
-
-## Where does Azure Firewall store customer data?
-
-Azure Firewall doesn't move or store customer data out of the region it's deployed in.
\ No newline at end of file
firewall https://docs.microsoft.com/en-us/azure/firewall/tutorial-firewall-deploy-portal https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/firewall/tutorial-firewall-deploy-portal.md
@@ -68,7 +68,7 @@ The resource group contains all the resources for the tutorial.
This VNet will contain three subnets. > [!NOTE]
-> The size of the AzureFirewallSubnet subnet is /26. For more information about the subnet size, see [Azure Firewall FAQ](firewall-faq.md#why-does-azure-firewall-need-a-26-subnet-size).
+> The size of the AzureFirewallSubnet subnet is /26. For more information about the subnet size, see [Azure Firewall FAQ](firewall-faq.yml#why-does-azure-firewall-need-a--26-subnet-size).
1. On the Azure portal menu or from the **Home** page, select **Create a resource**. 1. Select **Networking** > **Virtual network**.
firewall https://docs.microsoft.com/en-us/azure/firewall/tutorial-firewall-dnat https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/firewall/tutorial-firewall-dnat.md
@@ -62,7 +62,7 @@ First, create the VNets and then peer them.
The firewall will be in this subnet, and the subnet name **must** be AzureFirewallSubnet. > [!NOTE]
- > The size of the AzureFirewallSubnet subnet is /26. For more information about the subnet size, see [Azure Firewall FAQ](firewall-faq.md#why-does-azure-firewall-need-a-26-subnet-size).
+ > The size of the AzureFirewallSubnet subnet is /26. For more information about the subnet size, see [Azure Firewall FAQ](firewall-faq.yml#why-does-azure-firewall-need-a--26-subnet-size).
10. For **Address range**, type **10.0.1.0/26**. 11. Use the other default settings, and then select **Create**.
firewall https://docs.microsoft.com/en-us/azure/firewall/tutorial-hybrid-portal https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/firewall/tutorial-hybrid-portal.md
@@ -79,7 +79,7 @@ First, create the resource group to contain the resources for this tutorial:
Now, create the VNet: > [!NOTE]
-> The size of the AzureFirewallSubnet subnet is /26. For more information about the subnet size, see [Azure Firewall FAQ](firewall-faq.md#why-does-azure-firewall-need-a-26-subnet-size).
+> The size of the AzureFirewallSubnet subnet is /26. For more information about the subnet size, see [Azure Firewall FAQ](firewall-faq.yml#why-does-azure-firewall-need-a--26-subnet-size).
1. From the Azure portal home page, select **Create a resource**. 2. Under **Networking**, select **Virtual network**.
governance https://docs.microsoft.com/en-us/azure/governance/blueprints/samples/dod-impact-level-4/control-mapping https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/governance/blueprints/samples/dod-impact-level-4/control-mapping.md
@@ -1,7 +1,7 @@
--- title: DoD Impact Level 4 blueprint sample controls description: Control mapping of the DoD Impact Level 4 blueprint sample. Each control is mapped to one or more Azure Policy definitions that assist with assessment.
-ms.date: 10/26/2020
+ms.date: 01/08/2021
ms.topic: sample --- # Control mapping of the DoD Impact Level 4 blueprint sample
governance https://docs.microsoft.com/en-us/azure/governance/blueprints/samples/dod-impact-level-4/deploy https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/governance/blueprints/samples/dod-impact-level-4/deploy.md
@@ -1,7 +1,7 @@
--- title: DoD Impact Level 4 blueprint sample description: Deploy steps for the DoD Impact Level 4 blueprint sample including blueprint artifact parameter details.
-ms.date: 10/26/2020
+ms.date: 01/08/2021
ms.topic: sample --- # Deploy the DoD Impact Level 4 blueprint sample
governance https://docs.microsoft.com/en-us/azure/governance/blueprints/samples/dod-impact-level-4/index https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/governance/blueprints/samples/dod-impact-level-4/index.md
@@ -1,7 +1,7 @@
--- title: DoD Impact Level 4 blueprint sample overview description: Overview of the DoD Impact Level 4 sample. This blueprint sample helps customers assess specific DoD Impact Level 4 controls.
-ms.date: 10/26/2020
+ms.date: 01/08/2021
ms.topic: sample --- # Overview of the DoD Impact Level 4 blueprint sample
governance https://docs.microsoft.com/en-us/azure/governance/blueprints/samples/dod-impact-level-5/control-mapping https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/governance/blueprints/samples/dod-impact-level-5/control-mapping.md
@@ -1,7 +1,7 @@
--- title: DoD Impact Level 5 blueprint sample controls description: Control mapping of the DoD Impact Level 5 blueprint sample. Each control is mapped to one or more Azure Policy definitions that assist with assessment.
-ms.date: 09/17/2020
+ms.date: 01/08/2021
ms.topic: sample --- # Control mapping of the DoD Impact Level 5 blueprint sample
governance https://docs.microsoft.com/en-us/azure/governance/blueprints/samples/dod-impact-level-5/deploy https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/governance/blueprints/samples/dod-impact-level-5/deploy.md
@@ -1,7 +1,7 @@
--- title: DoD Impact Level 5 blueprint sample description: Deploy steps for the DoD Impact Level 5 blueprint sample including blueprint artifact parameter details.
-ms.date: 09/17/2020
+ms.date: 01/08/2021
ms.topic: sample --- # Deploy the DoD Impact Level 5 blueprint sample
governance https://docs.microsoft.com/en-us/azure/governance/blueprints/samples/dod-impact-level-5/index https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/governance/blueprints/samples/dod-impact-level-5/index.md
@@ -1,7 +1,7 @@
--- title: DoD Impact Level 5 blueprint sample overview description: Overview of the DoD Impact Level 5 sample. This blueprint sample helps customers assess specific DoD Impact Level 5 controls.
-ms.date: 09/17/2020
+ms.date: 01/08/2021
ms.topic: sample --- # Overview of the DoD Impact Level 5 blueprint sample
governance https://docs.microsoft.com/en-us/azure/governance/blueprints/samples/fedramp-h/control-mapping https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/governance/blueprints/samples/fedramp-h/control-mapping.md
@@ -1,7 +1,7 @@
--- title: FedRAMP High blueprint sample controls description: Control mapping of the FedRAMP High blueprint sample. Each control is mapped to one or more Azure Policy definitions that assist with assessment.
-ms.date: 10/26/2020
+ms.date: 01/08/2021
ms.topic: sample --- # Control mapping of the FedRAMP High blueprint sample
governance https://docs.microsoft.com/en-us/azure/governance/blueprints/samples/fedramp-h/deploy https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/governance/blueprints/samples/fedramp-h/deploy.md
@@ -1,7 +1,7 @@
--- title: Deploy FedRAMP High blueprint sample description: Deploy steps for the FedRAMP High blueprint sample including blueprint artifact parameter details.
-ms.date: 10/26/2020
+ms.date: 01/08/2021
ms.topic: sample --- # Deploy the FedRAMP High blueprint sample
governance https://docs.microsoft.com/en-us/azure/governance/blueprints/samples/fedramp-h/index https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/governance/blueprints/samples/fedramp-h/index.md
@@ -1,7 +1,7 @@
--- title: FedRAMP High blueprint sample overview description: Overview of the FedRAMP High blueprint sample. This blueprint sample helps customers assess specific FedRAMP High controls.
-ms.date: 10/26/2020
+ms.date: 01/08/2021
ms.topic: sample --- # Overview of the FedRAMP High blueprint sample
governance https://docs.microsoft.com/en-us/azure/governance/blueprints/samples/fedramp-m/control-mapping https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/governance/blueprints/samples/fedramp-m/control-mapping.md
@@ -1,7 +1,7 @@
--- title: FedRAMP Moderate blueprint sample controls description: Control mapping of the FedRAMP Moderate blueprint sample. Each control is mapped to one or more Azure Policy definitions that assist with assessment.
-ms.date: 10/26/2020
+ms.date: 01/08/2021
ms.topic: sample --- # Control mapping of the FedRAMP Moderate blueprint sample
governance https://docs.microsoft.com/en-us/azure/governance/blueprints/samples/fedramp-m/deploy https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/governance/blueprints/samples/fedramp-m/deploy.md
@@ -1,7 +1,7 @@
--- title: Deploy FedRAMP Moderate blueprint sample description: Deploy steps for the FedRAMP Moderate blueprint sample including blueprint artifact parameter details.
-ms.date: 10/26/2020
+ms.date: 01/08/2021
ms.topic: sample --- # Deploy the FedRAMP Moderate blueprint sample
governance https://docs.microsoft.com/en-us/azure/governance/blueprints/samples/fedramp-m/index https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/governance/blueprints/samples/fedramp-m/index.md
@@ -1,7 +1,7 @@
--- title: FedRAMP Moderate blueprint sample overview description: Overview of the FedRAMP Moderate blueprint sample. This blueprint sample helps customers assess specific FedRAMP Moderate controls.
-ms.date: 10/26/2020
+ms.date: 01/08/2021
ms.topic: sample --- # Overview of the FedRAMP Moderate blueprint sample
governance https://docs.microsoft.com/en-us/azure/governance/blueprints/samples/irs-1075/control-mapping https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/governance/blueprints/samples/irs-1075/control-mapping.md
@@ -1,7 +1,7 @@
--- title: IRS 1075 blueprint sample controls description: Control mapping of the IRS 1075 blueprint sample. Each control is mapped to one or more Azure Policy definitions that assists with assessment.
-ms.date: 08/19/2020
+ms.date: 01/08/2021
ms.topic: sample --- # Control mapping of the IRS 1075 blueprint sample
governance https://docs.microsoft.com/en-us/azure/governance/blueprints/samples/irs-1075/deploy https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/governance/blueprints/samples/irs-1075/deploy.md
@@ -1,7 +1,7 @@
--- title: Deploy IRS 1075 blueprint sample description: Deploy steps for the IRS 1075 (Rev.11-2016) blueprint sample including blueprint artifact parameter details.
-ms.date: 08/19/2020
+ms.date: 01/08/2021
ms.topic: sample --- # Deploy the IRS 1075 blueprint sample
governance https://docs.microsoft.com/en-us/azure/governance/blueprints/samples/irs-1075/index https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/governance/blueprints/samples/irs-1075/index.md
@@ -1,7 +1,7 @@
--- title: IRS 1075 blueprint sample overview description: Overview of the IRS 1075 blueprint sample. This blueprint sample helps customers assess specific IRS 1075 controls.
-ms.date: 08/19/2020
+ms.date: 01/08/2021
ms.topic: sample --- # Overview of the IRS 1075 blueprint sample
governance https://docs.microsoft.com/en-us/azure/governance/blueprints/samples/media/control-mapping https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/governance/blueprints/samples/media/control-mapping.md
@@ -1,7 +1,7 @@
--- title: Media blueprint sample controls description: Control mapping of the Media blueprint samples. Each control is mapped to one or more Azure Policy definitions that assist with assessment.
-ms.date: 08/13/2020
+ms.date: 01/08/2021
ms.topic: sample --- # Control mapping of the Media blueprint sample
governance https://docs.microsoft.com/en-us/azure/governance/blueprints/samples/media/deploy https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/governance/blueprints/samples/media/deploy.md
@@ -1,7 +1,7 @@
--- title: Deploy Media blueprint sample description: Deploy steps for the Media blueprint sample including blueprint artifact parameter details.
-ms.date: 08/13/2020
+ms.date: 01/08/2021
ms.topic: sample --- # Deploy the Media blueprint sample
governance https://docs.microsoft.com/en-us/azure/governance/blueprints/samples/media/index https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/governance/blueprints/samples/media/index.md
@@ -1,7 +1,7 @@
--- title: Media blueprint sample overview description: Overview of the Media blueprint sample. This blueprint sample helps customers assess specific Media controls.
-ms.date: 08/13/2020
+ms.date: 01/08/2021
ms.topic: sample --- # Overview of the Media blueprint sample
governance https://docs.microsoft.com/en-us/azure/governance/blueprints/samples/pci-dss-3.2.1/control-mapping https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/governance/blueprints/samples/pci-dss-3.2.1/control-mapping.md
@@ -1,7 +1,7 @@
--- title: PCI-DSS v3.2.1 blueprint sample controls description: Control mapping of the Payment Card Industry Data Security Standard v3.2.1 blueprint sample to Azure Policy and Azure RBAC.
-ms.date: 08/19/2020
+ms.date: 01/08/2021
ms.topic: sample --- # Control mapping of the PCI-DSS v3.2.1 blueprint sample
governance https://docs.microsoft.com/en-us/azure/governance/blueprints/samples/pci-dss-3.2.1/deploy https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/governance/blueprints/samples/pci-dss-3.2.1/deploy.md
@@ -1,7 +1,7 @@
--- title: Deploy PCI-DSS v3.2.1 blueprint sample description: Deploy steps for the Payment Card Industry Data Security Standard v3.2.1 blueprint sample including blueprint artifact parameter details.
-ms.date: 08/19/2020
+ms.date: 01/08/2021
ms.topic: sample --- # Deploy the PCI-DSS v3.2.1 blueprint sample
governance https://docs.microsoft.com/en-us/azure/governance/blueprints/samples/pci-dss-3.2.1/index https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/governance/blueprints/samples/pci-dss-3.2.1/index.md
@@ -1,7 +1,7 @@
--- title: PCI-DSS v3.2.1 blueprint sample overview description: Overview of the Payment Card Industry Data Security Standard v3.2.1 blueprint sample. This blueprint sample helps customers assess specific controls.
-ms.date: 08/19/2020
+ms.date: 01/08/2021
ms.topic: sample --- # Overview of the PCI-DSS v3.2.1 blueprint sample
governance https://docs.microsoft.com/en-us/azure/governance/blueprints/samples/swift-2020/control-mapping https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/governance/blueprints/samples/swift-2020/control-mapping.md
@@ -1,7 +1,7 @@
--- title: SWIFT CSP-CSCF v2020 blueprint sample controls description: Control mapping of the SWIFT CSP-CSCF v2020 blueprint sample. Each control is mapped to one or more Azure Policy definitions that assist with assessment.
-ms.date: 08/18/2020
+ms.date: 01/08/2021
ms.topic: sample --- # Control mapping of the SWIFT CSP-CSCF v2020 blueprint sample
governance https://docs.microsoft.com/en-us/azure/governance/blueprints/samples/swift-2020/deploy https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/governance/blueprints/samples/swift-2020/deploy.md
@@ -1,7 +1,7 @@
--- title: Deploy SWIFT CSP-CSCF v2020 blueprint sample description: Deploy steps for the SWIFT CSP-CSCF v2020 blueprint sample including blueprint artifact parameter details.
-ms.date: 08/18/2020
+ms.date: 01/08/2021
ms.topic: sample --- # Deploy the SWIFT CSP-CSCF v2020 blueprint sample
governance https://docs.microsoft.com/en-us/azure/governance/blueprints/samples/swift-2020/index https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/governance/blueprints/samples/swift-2020/index.md
@@ -1,7 +1,7 @@
--- title: SWIFT CSP-CSCF v2020 blueprint sample overview description: Overview of the SWIFT CSP-CSCF v2020 blueprint sample. This blueprint sample helps customers assess specific SWIFT CSP-CSCF controls.
-ms.date: 08/18/2020
+ms.date: 01/08/2021
ms.topic: sample --- # Overview of the SWIFT CSP-CSCF v2020 blueprint sample
governance https://docs.microsoft.com/en-us/azure/governance/policy/how-to/programmatically-create https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/governance/policy/how-to/programmatically-create.md
@@ -115,7 +115,7 @@ HTTP requests.
- Management group - `/providers/Microsoft.Management/managementGroups/{mgName}` For more information about managing resource policies using the Resource Manager PowerShell
-module, see [Az.Resources](/powershell/module/az.resources/#policies).
+module, see [Az.Resources](/powershell/module/az.resources/#policy).
### Create and assign a policy definition using ARMClient
@@ -281,7 +281,7 @@ For more information about how you can manage resource policies with Azure CLI,
Review the following articles for more information about the commands and queries in this article. - [Azure REST API Resources](/rest/api/resources/)-- [Azure PowerShell Modules](/powershell/module/az.resources/#policies)
+- [Azure PowerShell Modules](/powershell/module/az.resources/#policy)
- [Azure CLI Policy Commands](/cli/azure/policy) - [Azure Policy Insights resource provider REST API reference](/rest/api/policy-insights) - [Organize your resources with Azure management groups](../../management-groups/overview.md).
governance https://docs.microsoft.com/en-us/azure/governance/policy/samples/azure-security-benchmark https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/governance/policy/samples/azure-security-benchmark.md
@@ -1,7 +1,7 @@
--- title: Regulatory Compliance details for Azure Security Benchmark description: Details of the Azure Security Benchmark Regulatory Compliance built-in initiative. Each control is mapped to one or more Azure Policy definitions that assist with assessment.
-ms.date: 11/20/2020
+ms.date: 01/08/2021
ms.topic: sample ms.custom: generated ---
@@ -18,7 +18,7 @@ The following mappings are to the **Azure Security Benchmark** controls. Use the
navigation on the right to jump directly to a specific **compliance domain**. Many of the controls are implemented with an [Azure Policy](../overview.md) initiative definition. To review the complete initiative definition, open **Policy** in the Azure portal and select the **Definitions** page.
-Then, find and select the **Azure Security Benchmark** Regulatory Compliance built-in
+Then, find and select the **Azure Security Benchmark v1** Regulatory Compliance built-in
initiative definition. This built-in initiative is deployed as part of the
@@ -45,18 +45,18 @@ This built-in initiative is deployed as part of the
|Name<br /><sub>(Azure portal)</sub> |Description |Effect(s) |Version<br /><sub>(GitHub)</sub> |
|---|---|---|---|
-|[Adaptive network hardening recommendations should be applied on internet facing virtual machines](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F08e6af2d-db70-460a-bfe9-d5bd474ba9d6) |Azure Security Center analyzes the traffic patterns of Internet facing virtual machines and provides Network Security Group rule recommendations that reduce the potential attack surface |AuditIfNotExists, Disabled |[2.0.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_AdaptiveNetworkHardenings_Audit.json) |
+|[Adaptive network hardening recommendations should be applied on internet facing virtual machines](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F08e6af2d-db70-460a-bfe9-d5bd474ba9d6) |Azure Security Center analyzes the traffic patterns of Internet facing virtual machines and provides Network Security Group rule recommendations that reduce the potential attack surface |AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_AdaptiveNetworkHardenings_Audit.json) |
|[All Internet traffic should be routed via your deployed Azure Firewall](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Ffc5e4038-4584-4632-8c85-c0448d374b2c) |Azure Security Center has identified that some of your subnets aren't protected with a next generation firewall. Protect your subnets from potential threats by restricting access to them with Azure Firewall or a supported next generation firewall |AuditIfNotExists, Disabled |[3.0.0-preview](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Network/ASC_All_Internet_traffic_should_be_routed_via_Azure_Firewall.json) |
|[App Service should use a virtual network service endpoint](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F2d21331d-a4c2-4def-a9ad-ee4e1e023beb) |This policy audits any App Service not configured to use a virtual network service endpoint. |AuditIfNotExists, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Network/VirtualNetworkServiceEndpoint_AppService_AuditIfNotExists.json) |
|[Authorized IP ranges should be defined on Kubernetes Services](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F0e246bcf-5f6f-4f87-bc6f-775d4712c7ea) |Restrict access to the Kubernetes Service Management API by granting API access only to IP addresses in specific ranges. It is recommended to limit access to authorized IP ranges to ensure that only applications from allowed networks can access the cluster. |Audit, Disabled |[2.0.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_EnableIpRanges_KubernetesService_Audit.json) |
|[Container Registry should use a virtual network service endpoint](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fc4857be7-912a-4c75-87e6-e30292bcdf78) |This policy audits any Container Registry not configured to use a virtual network service endpoint. |Audit, Disabled |[1.0.0-preview](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Network/VirtualNetworkServiceEndpoint_ContainerRegistry_Audit.json) |
|[Cosmos DB should use a virtual network service endpoint](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fe0a2b1a3-f7f9-4569-807f-2a9edebdf4d9) |This policy audits any Cosmos DB not configured to use a virtual network service endpoint. |Audit, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Network/VirtualNetworkServiceEndpoint_CosmosDB_Audit.json) |
|[Event Hub should use a virtual network service endpoint](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fd63edb4a-c612-454d-b47d-191a724fcbf0) |This policy audits any Event Hub not configured to use a virtual network service endpoint. |AuditIfNotExists, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Network/VirtualNetworkServiceEndpoint_EventHub_AuditIfNotExists.json) |
-|[Internet-facing virtual machines should be protected with network security groups](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Ff6de0be7-9a8a-4b8a-b349-43cf02d22f7c) |Protect your virtual machines from potential threats by restricting access to them with network security groups (NSG). Learn more about controlling traffic with NSGs at [https://aka.ms/nsg-doc](https://aka.ms/nsg-doc) |AuditIfNotExists, Disabled |[2.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_NetworkSecurityGroupsOnInternetFacingVirtualMachines_Audit.json) |
-|[IP Forwarding on your virtual machine should be disabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fbd352bd5-2853-4985-bf0d-73806b4a5744) |Enabling IP forwarding on a virtual machine's NIC allows the machine to receive traffic addressed to other destinations. IP forwarding is rarely required (e.g., when using the VM as a network virtual appliance), and therefore, this should be reviewed by the network security team. |AuditIfNotExists, Disabled |[2.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_IPForwardingOnVirtualMachines_Audit.json) |
+|[Internet-facing virtual machines should be protected with network security groups](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Ff6de0be7-9a8a-4b8a-b349-43cf02d22f7c) |Protect your virtual machines from potential threats by restricting access to them with network security groups (NSG). Learn more about controlling traffic with NSGs at [https://aka.ms/nsg-doc](https://aka.ms/nsg-doc) |AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_NetworkSecurityGroupsOnInternetFacingVirtualMachines_Audit.json) |
+|[IP Forwarding on your virtual machine should be disabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fbd352bd5-2853-4985-bf0d-73806b4a5744) |Enabling IP forwarding on a virtual machine's NIC allows the machine to receive traffic addressed to other destinations. IP forwarding is rarely required (e.g., when using the VM as a network virtual appliance), and therefore, this should be reviewed by the network security team. |AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_IPForwardingOnVirtualMachines_Audit.json) |
|[Key Vault should use a virtual network service endpoint](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fea4d6841-2173-4317-9747-ff522a45120f) |This policy audits any Key Vault not configured to use a virtual network service endpoint. |Audit, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Network/VirtualNetworkServiceEndpoint_KeyVault_Audit.json) |
-|[Management ports of virtual machines should be protected with just-in-time network access control](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fb0f33259-77d7-4c9e-aac6-3aabcfae693c) |Possible network Just In Time (JIT) access will be monitored by Azure Security Center as recommendations |AuditIfNotExists, Disabled |[2.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_JITNetworkAccess_Audit.json) |
-|[Management ports should be closed on your virtual machines](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F22730e10-96f6-4aac-ad84-9383d35b5917) |Open remote management ports are exposing your VM to a high level of risk from Internet-based attacks. These attacks attempt to brute force credentials to gain admin access to the machine. |AuditIfNotExists, Disabled |[2.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_OpenManagementPortsOnVirtualMachines_Audit.json) |
+|[Management ports of virtual machines should be protected with just-in-time network access control](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fb0f33259-77d7-4c9e-aac6-3aabcfae693c) |Possible network Just In Time (JIT) access will be monitored by Azure Security Center as recommendations |AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_JITNetworkAccess_Audit.json) |
+|[Management ports should be closed on your virtual machines](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F22730e10-96f6-4aac-ad84-9383d35b5917) |Open remote management ports are exposing your VM to a high level of risk from Internet-based attacks. These attacks attempt to brute force credentials to gain admin access to the machine. |AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_OpenManagementPortsOnVirtualMachines_Audit.json) |
|[Private endpoint should be enabled for MariaDB servers](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F0a1302fb-a631-4106-9753-f3d494733990) |Private endpoint connections enforce secure communication by enabling private connectivity to Azure Database for MariaDB. Configure a private endpoint connection to enable access to traffic coming only from known networks and prevent access from all other IP addresses, including within Azure. |AuditIfNotExists, Disabled |[1.0.2](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/SQL/MariaDB_EnablePrivateEndPoint_Audit.json) |
|[Private endpoint should be enabled for MySQL servers](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F7595c971-233d-4bcf-bd18-596129188c49) |Private endpoint connections enforce secure communication by enabling private connectivity to Azure Database for MySQL. Configure a private endpoint connection to enable access to traffic coming only from known networks and prevent access from all other IP addresses, including within Azure. |AuditIfNotExists, Disabled |[1.0.2](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/SQL/MySQL_EnablePrivateEndPoint_Audit.json) |
|[Private endpoint should be enabled for PostgreSQL servers](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F0564d078-92f5-4f97-8398-b9f58a51f70b) |Private endpoint connections enforce secure communication by enabling private connectivity to Azure Database for PostgreSQL. Configure a private endpoint connection to enable access to traffic coming only from known networks and prevent access from all other IP addresses, including within Azure. |AuditIfNotExists, Disabled |[1.0.2](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/SQL/PostgreSQL_EnablePrivateEndPoint_Audit.json) |
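The Effect(s) column in these tables maps to the `effect` field inside each definition's `policyRule`; pairs like "Audit, Disabled" mean the effect is parameterized so an assignment can switch auditing off without being deleted. As a rough illustration only (a simplified sketch, not the linked ASC definition, and the `enableIPForwarding` alias is an assumption based on the standard network aliases), a definition in the spirit of the IP-forwarding row above could look like this:

```json
{
  "properties": {
    "displayName": "Illustrative sketch: audit NICs with IP forwarding enabled",
    "description": "Hypothetical simplified rule. The built-in ASC definition linked above evaluates Security Center assessments instead.",
    "mode": "Indexed",
    "parameters": {
      "effect": {
        "type": "String",
        "allowedValues": [ "Audit", "Disabled" ],
        "defaultValue": "Audit"
      }
    },
    "policyRule": {
      "if": {
        "allOf": [
          { "field": "type", "equals": "Microsoft.Network/networkInterfaces" },
          { "field": "Microsoft.Network/networkInterfaces/enableIPForwarding", "equals": "true" }
        ]
      },
      "then": { "effect": "[parameters('effect')]" }
    }
  }
}
```

Parameterizing the effect is the convention these built-ins follow, which is why "Disabled" appears alongside every audit effect in the tables.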
@@ -99,10 +99,10 @@ This built-in initiative is deployed as part of the
|Name<br /><sub>(Azure portal)</sub> |Description |Effect(s) |Version<br /><sub>(GitHub)</sub> |
|---|---|---|---|
-|[Adaptive network hardening recommendations should be applied on internet facing virtual machines](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F08e6af2d-db70-460a-bfe9-d5bd474ba9d6) |Azure Security Center analyzes the traffic patterns of Internet facing virtual machines and provides Network Security Group rule recommendations that reduce the potential attack surface |AuditIfNotExists, Disabled |[2.0.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_AdaptiveNetworkHardenings_Audit.json) |
+|[Adaptive network hardening recommendations should be applied on internet facing virtual machines](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F08e6af2d-db70-460a-bfe9-d5bd474ba9d6) |Azure Security Center analyzes the traffic patterns of Internet facing virtual machines and provides Network Security Group rule recommendations that reduce the potential attack surface |AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_AdaptiveNetworkHardenings_Audit.json) |
|[All Internet traffic should be routed via your deployed Azure Firewall](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Ffc5e4038-4584-4632-8c85-c0448d374b2c) |Azure Security Center has identified that some of your subnets aren't protected with a next generation firewall. Protect your subnets from potential threats by restricting access to them with Azure Firewall or a supported next generation firewall |AuditIfNotExists, Disabled |[3.0.0-preview](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Network/ASC_All_Internet_traffic_should_be_routed_via_Azure_Firewall.json) |
-|[Azure DDoS Protection Standard should be enabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fa7aca53f-2ed4-4466-a25e-0b45ade68efd) |DDoS protection standard should be enabled for all virtual networks with a subnet that is part of an application gateway with a public IP. |AuditIfNotExists, Disabled |[2.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_EnableDDoSProtection_Audit.json) |
-|[Management ports of virtual machines should be protected with just-in-time network access control](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fb0f33259-77d7-4c9e-aac6-3aabcfae693c) |Possible network Just In Time (JIT) access will be monitored by Azure Security Center as recommendations |AuditIfNotExists, Disabled |[2.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_JITNetworkAccess_Audit.json) |
+|[Azure DDoS Protection Standard should be enabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fa7aca53f-2ed4-4466-a25e-0b45ade68efd) |DDoS protection standard should be enabled for all virtual networks with a subnet that is part of an application gateway with a public IP. |AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_EnableDDoSProtection_Audit.json) |
+|[Management ports of virtual machines should be protected with just-in-time network access control](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fb0f33259-77d7-4c9e-aac6-3aabcfae693c) |Possible network Just In Time (JIT) access will be monitored by Azure Security Center as recommendations |AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_JITNetworkAccess_Audit.json) |
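The version bumps above (2.0.0 to 3.0.0) revise each definition in place; assignments keep pointing at the same immutable definition ID, such as the JIT policy's `b0f33259-77d7-4c9e-aac6-3aabcfae693c` GUID from the row above. A minimal sketch of an assignment body, assuming the `Microsoft.Authorization/policyAssignments` schema (property names are taken from that schema; the scope is carried by the request URI, not the body):

```json
{
  "properties": {
    "displayName": "Audit JIT network access on VM management ports",
    "description": "Sketch of an assignment body; the definition ID is the JIT policy listed in the table above.",
    "policyDefinitionId": "/providers/Microsoft.Authorization/policyDefinitions/b0f33259-77d7-4c9e-aac6-3aabcfae693c",
    "enforcementMode": "Default",
    "parameters": {}
  }
}
```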
### Record network packets and flow logs
@@ -138,7 +138,7 @@ This built-in initiative is deployed as part of the
|Name<br /><sub>(Azure portal)</sub> |Description |Effect(s) |Version<br /><sub>(GitHub)</sub> |
|---|---|---|---|
|[Audit Windows machines on which the Log Analytics agent is not connected as expected](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F6265018c-d7e2-432f-a75d-094d5f6f4465) |Requires that prerequisites are deployed to the policy assignment scope. For details, visit [https://aka.ms/gcpol](https://aka.ms/gcpol). Machines are non-compliant if the agent is not installed, or if it is installed but the COM object AgentConfigManager.MgmtSvcCfg returns that it is registered to a workspace other than the ID specified in the policy parameter. |auditIfNotExists |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Guest%20Configuration/GuestConfiguration_WindowsLogAnalyticsAgentConnection_AINE.json) |
-|[Automatic provisioning of the Log Analytics monitoring agent should be enabled on your subscription](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F475aae12-b88a-4572-8b36-9b712b2b3a17) |Enable automatic provisioning of the Log Analytics monitoring agent in order to collect security data |AuditIfNotExists, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_Automatic_provisioning_log_analytics_monitoring_agent.json) |
+|[Auto provisioning of the Log Analytics agent should be enabled on your subscription](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F475aae12-b88a-4572-8b36-9b712b2b3a17) |To monitor for security vulnerabilities and threats, Azure Security Center collects data from your Azure virtual machines. Data is collected by the Log Analytics agent, formerly known as the Microsoft Monitoring Agent (MMA), which reads various security-related configurations and event logs from the machine and copies the data to your Log Analytics workspace for analysis. We recommend enabling auto provisioning to automatically deploy the agent to all supported Azure VMs and any new ones that are created. |AuditIfNotExists, Disabled |[1.0.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_Automatic_provisioning_log_analytics_monitoring_agent.json) |
|[Azure Monitor log profile should collect logs for categories 'write,' 'delete,' and 'action'](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F1a4e592a-6a6e-44a5-9814-e36264ca96e7) |This policy ensures that a log profile collects logs for categories 'write,' 'delete,' and 'action' |AuditIfNotExists, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Monitoring/ActivityLog_CaptureAllCategories.json) |
|[Azure Monitor should collect activity logs from all regions](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F41388f1c-2db0-4c25-95b2-35d7f5ccbfa9) |This policy audits the Azure Monitor log profile which does not export activities from all Azure supported regions including global. |AuditIfNotExists, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Monitoring/ActivityLog_CaptureAllRegions.json) |
|[The Log Analytics agent should be installed on Virtual Machine Scale Sets](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fefbde977-ba53-4479-b8e9-10b957924fbf) |This policy audits any Windows/Linux Virtual Machine Scale Sets if the Log Analytics agent is not installed. |AuditIfNotExists, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Monitoring/VMSS_LogAnalyticsAgent_AuditIfNotExists.json) |
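Most rows in this table carry the `AuditIfNotExists` effect: the `if` block selects a parent resource, and `then.details` names a related resource type plus an `existenceCondition`; the parent is flagged non-compliant when no matching related resource exists. A hedged sketch of that shape, flagging VMs that lack the Log Analytics (Microsoft Monitoring Agent) VM extension (the publisher and type strings below are assumptions, not quoted from the built-in):

```json
{
  "properties": {
    "displayName": "Illustrative sketch: audit VMs missing the Log Analytics agent extension",
    "description": "Hypothetical AuditIfNotExists rule; not the verbatim built-in definition linked above.",
    "mode": "Indexed",
    "policyRule": {
      "if": { "field": "type", "equals": "Microsoft.Compute/virtualMachines" },
      "then": {
        "effect": "AuditIfNotExists",
        "details": {
          "type": "Microsoft.Compute/virtualMachines/extensions",
          "existenceCondition": {
            "allOf": [
              { "field": "Microsoft.Compute/virtualMachines/extensions/type", "equals": "MicrosoftMonitoringAgent" },
              { "field": "Microsoft.Compute/virtualMachines/extensions/publisher", "equals": "Microsoft.EnterpriseCloud.Monitoring" }
            ]
          }
        }
      }
    }
  }
}
```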
@@ -152,7 +152,7 @@ This built-in initiative is deployed as part of the
|Name<br /><sub>(Azure portal)</sub> |Description |Effect(s) |Version<br /><sub>(GitHub)</sub> |
|---|---|---|---|
|[Audit diagnostic setting](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F7f89b1eb-583c-429a-8828-af049802c1d9) |Audit diagnostic setting for selected resource types |AuditIfNotExists |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Monitoring/DiagnosticSettingsForTypes_Audit.json) |
-|[Auditing on SQL server should be enabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fa6fb4358-5bf4-4ad7-ba82-2cd2f41ce5e9) |Auditing on your SQL Server should be enabled to track database activities across all databases on the server, except Synapse, and save them in an audit log. |AuditIfNotExists, Disabled |[2.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/SQL/SqlServerAuditing_Audit.json) |
+|[Auditing on SQL server should be enabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fa6fb4358-5bf4-4ad7-ba82-2cd2f41ce5e9) |Auditing on your SQL Server should be enabled to track database activities across all databases on the server and save them in an audit log. |AuditIfNotExists, Disabled |[2.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/SQL/SqlServerAuditing_Audit.json) |
|[Diagnostic logs in App Services should be enabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fb607c5de-e7d9-4eee-9e5c-83f1bcee4fa0) |Audit enabling of diagnostic logs on the app. This enables you to recreate activity trails for investigation purposes if a security incident occurs or your network is compromised |AuditIfNotExists, Disabled |[2.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/App%20Service/AppService_AuditLoggingMonitoring_Audit.json) |
|[Diagnostic logs in Azure Data Lake Store should be enabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F057ef27e-665e-4328-8ea3-04b3122bd9fb) |Audit enabling of diagnostic logs. This enables you to recreate activity trails to use for investigation purposes; when a security incident occurs or when your network is compromised |AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Data%20Lake/DataLakeStore_AuditDiagnosticLog_Audit.json) |
|[Diagnostic logs in Azure Stream Analytics should be enabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Ff9be5368-9bf5-4b84-9e0a-7850da98bb46) |Audit enabling of diagnostic logs. This enables you to recreate activity trails to use for investigation purposes; when a security incident occurs or when your network is compromised |AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Stream%20Analytics/StreamAnalytics_AuditDiagnosticLog_Audit.json) |
@@ -175,7 +175,7 @@ This built-in initiative is deployed as part of the
|Name<br /><sub>(Azure portal)</sub> |Description |Effect(s) |Version<br /><sub>(GitHub)</sub> |
|---|---|---|---|
|[Audit Windows machines on which the Log Analytics agent is not connected as expected](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F6265018c-d7e2-432f-a75d-094d5f6f4465) |Requires that prerequisites are deployed to the policy assignment scope. For details, visit [https://aka.ms/gcpol](https://aka.ms/gcpol). Machines are non-compliant if the agent is not installed, or if it is installed but the COM object AgentConfigManager.MgmtSvcCfg returns that it is registered to a workspace other than the ID specified in the policy parameter. |auditIfNotExists |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Guest%20Configuration/GuestConfiguration_WindowsLogAnalyticsAgentConnection_AINE.json) |
-|[Automatic provisioning of the Log Analytics monitoring agent should be enabled on your subscription](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F475aae12-b88a-4572-8b36-9b712b2b3a17) |Enable automatic provisioning of the Log Analytics monitoring agent in order to collect security data |AuditIfNotExists, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_Automatic_provisioning_log_analytics_monitoring_agent.json) |
+|[Auto provisioning of the Log Analytics agent should be enabled on your subscription](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F475aae12-b88a-4572-8b36-9b712b2b3a17) |To monitor for security vulnerabilities and threats, Azure Security Center collects data from your Azure virtual machines. Data is collected by the Log Analytics agent, formerly known as the Microsoft Monitoring Agent (MMA), which reads various security-related configurations and event logs from the machine and copies the data to your Log Analytics workspace for analysis. We recommend enabling auto provisioning to automatically deploy the agent to all supported Azure VMs and any new ones that are created. |AuditIfNotExists, Disabled |[1.0.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_Automatic_provisioning_log_analytics_monitoring_agent.json) |
|[The Log Analytics agent should be installed on Virtual Machine Scale Sets](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fefbde977-ba53-4479-b8e9-10b957924fbf) |This policy audits any Windows/Linux Virtual Machine Scale Sets if the Log Analytics agent is not installed. |AuditIfNotExists, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Monitoring/VMSS_LogAnalyticsAgent_AuditIfNotExists.json) |
|[The Log Analytics agent should be installed on virtual machines](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fa70ca396-0a34-413a-88e1-b956c1e683be) |This policy audits any Windows/Linux virtual machines if the Log Analytics agent is not installed. |AuditIfNotExists, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Monitoring/VirtualMachines_LogAnalyticsAgent_AuditIfNotExists.json) |
@@ -186,7 +186,7 @@ This built-in initiative is deployed as part of the
|Name<br /><sub>(Azure portal)</sub> |Description |Effect(s) |Version<br /><sub>(GitHub)</sub> |
|---|---|---|---|
-|[SQL servers should be configured with auditing retention days greater than 90 days.](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F89099bee-89e0-4b26-a5f4-165451757743) |Audit SQL servers configured with an auditing retention period of less than 90 days. |AuditIfNotExists, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/SQL/SqlServerAuditingRetentionDays_Audit.json) |
+|[SQL servers should be configured with 90 days auditing retention or higher.](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F89099bee-89e0-4b26-a5f4-165451757743) |SQL servers should be configured with 90 days auditing retention or higher. |AuditIfNotExists, Disabled |[2.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/SQL/SqlServerAuditingRetentionDays_Audit.json) |
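The renamed retention policy above follows the same `AuditIfNotExists` pattern, but its existence condition tests a property value rather than mere presence. A simplified sketch (the `auditingSettings` aliases and the treatment of `0` as unlimited retention are assumptions, not the verbatim built-in rule):

```json
{
  "properties": {
    "displayName": "Illustrative sketch: audit SQL servers with under 90 days of auditing retention",
    "description": "Hypothetical simplified rule; see the linked built-in for the actual definition.",
    "mode": "Indexed",
    "policyRule": {
      "if": { "field": "type", "equals": "Microsoft.Sql/servers" },
      "then": {
        "effect": "AuditIfNotExists",
        "details": {
          "type": "Microsoft.Sql/servers/auditingSettings",
          "existenceCondition": {
            "anyOf": [
              { "field": "Microsoft.Sql/servers/auditingSettings/retentionDays", "equals": 0 },
              { "field": "Microsoft.Sql/servers/auditingSettings/retentionDays", "greaterOrEquals": 90 }
            ]
          }
        }
      }
    }
  }
}
```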
### Enable alerts for anomalous activity
@@ -205,9 +205,9 @@ This built-in initiative is deployed as part of the
|Name<br /><sub>(Azure portal)</sub> |Description |Effect(s) |Version<br /><sub>(GitHub)</sub> |
|---|---|---|---|
-|[Endpoint protection solution should be installed on virtual machine scale sets](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F26a828e1-e88f-464e-bbb3-c134a282b9de) |Audit the existence and health of an endpoint protection solution on your virtual machine scale sets, to protect them from threats and vulnerabilities. |AuditIfNotExists, Disabled |[2.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_VmssMissingEndpointProtection_Audit.json) |
+|[Endpoint protection solution should be installed on virtual machine scale sets](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F26a828e1-e88f-464e-bbb3-c134a282b9de) |Audit the existence and health of an endpoint protection solution on your virtual machine scale sets, to protect them from threats and vulnerabilities. |AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_VmssMissingEndpointProtection_Audit.json) |
|[Microsoft Antimalware for Azure should be configured to automatically update protection signatures](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fc43e4a30-77cb-48ab-a4dd-93f175c63b57) |This policy audits any Windows virtual machine not configured with automatic update of Microsoft Antimalware protection signatures. |AuditIfNotExists, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Compute/VirtualMachines_AntiMalwareAutoUpdate_AuditIfNotExists.json) |
-|[Monitor missing Endpoint Protection in Azure Security Center](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Faf6cd1bd-1635-48cb-bde7-5b15693900b9) |Servers without an installed Endpoint Protection agent will be monitored by Azure Security Center as recommendations |AuditIfNotExists, Disabled |[2.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_MissingEndpointProtection_Audit.json) |
+|[Monitor missing Endpoint Protection in Azure Security Center](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Faf6cd1bd-1635-48cb-bde7-5b15693900b9) |Servers without an installed Endpoint Protection agent will be monitored by Azure Security Center as recommendations |AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_MissingEndpointProtection_Audit.json) |
## Identity and Access Control
@@ -218,10 +218,10 @@ This built-in initiative is deployed as part of the
|Name<br /><sub>(Azure portal)</sub> |Description |Effect(s) |Version<br /><sub>(GitHub)</sub> |
|---|---|---|---|
-|[A maximum of 3 owners should be designated for your subscription](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F4f11b553-d42e-4e3a-89be-32ca364cad4c) |It is recommended to designate up to 3 subscription owners in order to reduce the potential for breach by a compromised owner. |AuditIfNotExists, Disabled |[2.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_DesignateLessThanXOwners_Audit.json) |
-|[Deprecated accounts with owner permissions should be removed from your subscription](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Febb62a0c-3560-49e1-89ed-27e074e9f8ad) |Deprecated accounts with owner permissions should be removed from your subscription. Deprecated accounts are accounts that have been blocked from signing in. |AuditIfNotExists, Disabled |[2.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_RemoveDeprecatedAccountsWithOwnerPermissions_Audit.json) |
-|[External accounts with owner permissions should be removed from your subscription](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Ff8456c1c-aa66-4dfb-861a-25d127b775c9) |External accounts with owner permissions should be removed from your subscription in order to prevent unmonitored access. |AuditIfNotExists, Disabled |[2.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_RemoveExternalAccountsWithOwnerPermissions_Audit.json) |
-|[There should be more than one owner assigned to your subscription](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F09024ccc-0c5f-475e-9457-b7c0d9ed487b) |It is recommended to designate more than one subscription owner in order to have administrator access redundancy. |AuditIfNotExists, Disabled |[2.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_DesignateMoreThanOneOwner_Audit.json) |
+|[A maximum of 3 owners should be designated for your subscription](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F4f11b553-d42e-4e3a-89be-32ca364cad4c) |It is recommended to designate up to 3 subscription owners in order to reduce the potential for breach by a compromised owner. |AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_DesignateLessThanXOwners_Audit.json) |
+|[Deprecated accounts with owner permissions should be removed from your subscription](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Febb62a0c-3560-49e1-89ed-27e074e9f8ad) |Deprecated accounts with owner permissions should be removed from your subscription. Deprecated accounts are accounts that have been blocked from signing in. |AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_RemoveDeprecatedAccountsWithOwnerPermissions_Audit.json) |
+|[External accounts with owner permissions should be removed from your subscription](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Ff8456c1c-aa66-4dfb-861a-25d127b775c9) |External accounts with owner permissions should be removed from your subscription in order to prevent unmonitored access. |AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_RemoveExternalAccountsWithOwnerPermissions_Audit.json) |
+|[There should be more than one owner assigned to your subscription](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F09024ccc-0c5f-475e-9457-b7c0d9ed487b) |It is recommended to designate more than one subscription owner in order to have administrator access redundancy. |AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_DesignateMoreThanOneOwner_Audit.json) |
### Use dedicated administrative accounts
@@ -230,11 +230,11 @@ This built-in initiative is deployed as part of the
|Name<br /><sub>(Azure portal)</sub> |Description |Effect(s) |Version<br /><sub>(GitHub)</sub> |
|---|---|---|---|
-|[A maximum of 3 owners should be designated for your subscription](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F4f11b553-d42e-4e3a-89be-32ca364cad4c) |It is recommended to designate up to 3 subscription owners in order to reduce the potential for breach by a compromised owner. |AuditIfNotExists, Disabled |[2.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_DesignateLessThanXOwners_Audit.json) |
+|[A maximum of 3 owners should be designated for your subscription](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F4f11b553-d42e-4e3a-89be-32ca364cad4c) |It is recommended to designate up to 3 subscription owners in order to reduce the potential for breach by a compromised owner. |AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_DesignateLessThanXOwners_Audit.json) |
|[Audit Windows machines missing any of specified members in the Administrators group](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F30f71ea1-ac77-4f26-9fc5-2d926bbd4ba7) |Requires that prerequisites are deployed to the policy assignment scope. For details, visit [https://aka.ms/gcpol](https://aka.ms/gcpol). Machines are non-compliant if the local Administrators group does not contain one or more members that are listed in the policy parameter. |auditIfNotExists |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Guest%20Configuration/GuestConfiguration_AdministratorsGroupMembersToInclude_AINE.json) |
|[Audit Windows machines that have extra accounts in the Administrators group](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F3d2a3320-2a72-4c67-ac5f-caa40fbee2b2) |Requires that prerequisites are deployed to the policy assignment scope. For details, visit [https://aka.ms/gcpol](https://aka.ms/gcpol). Machines are non-compliant if the local Administrators group contains members that are not listed in the policy parameter. |auditIfNotExists |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Guest%20Configuration/GuestConfiguration_AdministratorsGroupMembers_AINE.json) |
|[Audit Windows machines that have the specified members in the Administrators group](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F69bf4abd-ca1e-4cf6-8b5a-762d42e61d4f) |Requires that prerequisites are deployed to the policy assignment scope. For details, visit [https://aka.ms/gcpol](https://aka.ms/gcpol). Machines are non-compliant if the local Administrators group contains one or more of the members listed in the policy parameter. |auditIfNotExists |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Guest%20Configuration/GuestConfiguration_AdministratorsGroupMembersToExclude_AINE.json) |
-|[There should be more than one owner assigned to your subscription](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F09024ccc-0c5f-475e-9457-b7c0d9ed487b) |It is recommended to designate more than one subscription owner in order to have administrator access redundancy. |AuditIfNotExists, Disabled |[2.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_DesignateMoreThanOneOwner_Audit.json) |
+|[There should be more than one owner assigned to your subscription](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F09024ccc-0c5f-475e-9457-b7c0d9ed487b) |It is recommended to designate more than one subscription owner in order to have administrator access redundancy. |AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_DesignateMoreThanOneOwner_Audit.json) |
### Use multi-factor authentication for all Azure Active Directory based access
@@ -243,9 +243,9 @@ This built-in initiative is deployed as part of the
|Name<br /><sub>(Azure portal)</sub> |Description |Effect(s) |Version<br /><sub>(GitHub)</sub> |
|---|---|---|---|
-|[MFA should be enabled on accounts with write permissions on your subscription](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F9297c21d-2ed6-4474-b48f-163f75654ce3) |Multi-Factor Authentication (MFA) should be enabled for all subscription accounts with write privileges to prevent a breach of accounts or resources. |AuditIfNotExists, Disabled |[2.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_EnableMFAForWritePermissions_Audit.json) |
-|[MFA should be enabled on accounts with owner permissions on your subscription](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Faa633080-8b72-40c4-a2d7-d00c03e80bed) |Multi-Factor Authentication (MFA) should be enabled for all subscription accounts with owner permissions to prevent a breach of accounts or resources. |AuditIfNotExists, Disabled |[2.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_EnableMFAForOwnerPermissions_Audit.json) |
-|[MFA should be enabled on accounts with read permissions on your subscription](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fe3576e28-8b17-4677-84c3-db2990658d64) |Multi-Factor Authentication (MFA) should be enabled for all subscription accounts with read privileges to prevent a breach of accounts or resources. |AuditIfNotExists, Disabled |[2.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_EnableMFAForReadPermissions_Audit.json) |
+|[MFA should be enabled on accounts with write permissions on your subscription](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F9297c21d-2ed6-4474-b48f-163f75654ce3) |Multi-Factor Authentication (MFA) should be enabled for all subscription accounts with write privileges to prevent a breach of accounts or resources. |AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_EnableMFAForWritePermissions_Audit.json) |
+|[MFA should be enabled on accounts with owner permissions on your subscription](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Faa633080-8b72-40c4-a2d7-d00c03e80bed) |Multi-Factor Authentication (MFA) should be enabled for all subscription accounts with owner permissions to prevent a breach of accounts or resources. |AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_EnableMFAForOwnerPermissions_Audit.json) |
+|[MFA should be enabled on accounts with read permissions on your subscription](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fe3576e28-8b17-4677-84c3-db2990658d64) |Multi-Factor Authentication (MFA) should be enabled for all subscription accounts with read privileges to prevent a breach of accounts or resources. |AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_EnableMFAForReadPermissions_Audit.json) |
### Use Azure Active Directory
@@ -264,11 +264,11 @@ This built-in initiative is deployed as part of the
|Name<br /><sub>(Azure portal)</sub> |Description |Effect(s) |Version<br /><sub>(GitHub)</sub> |
|---|---|---|---|
-|[Deprecated accounts should be removed from your subscription](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F6b1cbf55-e8b6-442f-ba4c-7246b6381474) |Deprecated accounts should be removed from your subscriptions. Deprecated accounts are accounts that have been blocked from signing in. |AuditIfNotExists, Disabled |[2.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_RemoveDeprecatedAccounts_Audit.json) |
-|[Deprecated accounts with owner permissions should be removed from your subscription](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Febb62a0c-3560-49e1-89ed-27e074e9f8ad) |Deprecated accounts with owner permissions should be removed from your subscription. Deprecated accounts are accounts that have been blocked from signing in. |AuditIfNotExists, Disabled |[2.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_RemoveDeprecatedAccountsWithOwnerPermissions_Audit.json) |
-|[External accounts with owner permissions should be removed from your subscription](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Ff8456c1c-aa66-4dfb-861a-25d127b775c9) |External accounts with owner permissions should be removed from your subscription in order to prevent unmonitored access. |AuditIfNotExists, Disabled |[2.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_RemoveExternalAccountsWithOwnerPermissions_Audit.json) |
-|[External accounts with read permissions should be removed from your subscription](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F5f76cf89-fbf2-47fd-a3f4-b891fa780b60) |External accounts with read privileges should be removed from your subscription in order to prevent unmonitored access. |AuditIfNotExists, Disabled |[2.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_RemoveExternalAccountsReadPermissions_Audit.json) |
-|[External accounts with write permissions should be removed from your subscription](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F5c607a2e-c700-4744-8254-d77e7c9eb5e4) |External accounts with write privileges should be removed from your subscription in order to prevent unmonitored access. |AuditIfNotExists, Disabled |[2.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_RemoveExternalAccountsWritePermissions_Audit.json) |
+|[Deprecated accounts should be removed from your subscription](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F6b1cbf55-e8b6-442f-ba4c-7246b6381474) |Deprecated accounts should be removed from your subscriptions. Deprecated accounts are accounts that have been blocked from signing in. |AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_RemoveDeprecatedAccounts_Audit.json) |
+|[Deprecated accounts with owner permissions should be removed from your subscription](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Febb62a0c-3560-49e1-89ed-27e074e9f8ad) |Deprecated accounts with owner permissions should be removed from your subscription. Deprecated accounts are accounts that have been blocked from signing in. |AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_RemoveDeprecatedAccountsWithOwnerPermissions_Audit.json) |
+|[External accounts with owner permissions should be removed from your subscription](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Ff8456c1c-aa66-4dfb-861a-25d127b775c9) |External accounts with owner permissions should be removed from your subscription in order to prevent unmonitored access. |AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_RemoveExternalAccountsWithOwnerPermissions_Audit.json) |
+|[External accounts with read permissions should be removed from your subscription](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F5f76cf89-fbf2-47fd-a3f4-b891fa780b60) |External accounts with read privileges should be removed from your subscription in order to prevent unmonitored access. |AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_RemoveExternalAccountsReadPermissions_Audit.json) |
+|[External accounts with write permissions should be removed from your subscription](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F5c607a2e-c700-4744-8254-d77e7c9eb5e4) |External accounts with write privileges should be removed from your subscription in order to prevent unmonitored access. |AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_RemoveExternalAccountsWritePermissions_Audit.json) |
## Data Protection
@@ -279,7 +279,7 @@ This built-in initiative is deployed as part of the
|Name<br /><sub>(Azure portal)</sub> |Description |Effect(s) |Version<br /><sub>(GitHub)</sub> |
|---|---|---|---|
-|[Sensitive data in your SQL databases should be classified](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fcc9835f2-9f6b-4cc8-ab4a-f8ef615eb349) |Azure Security Center monitors the data discovery and classification scan results for your SQL databases and provides recommendations to classify the sensitive data in your databases for better monitoring and security |AuditIfNotExists, Disabled |[2.0.0-preview](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_SQLDbDataClassification_Audit.json) |
+|[Sensitive data in your SQL databases should be classified](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fcc9835f2-9f6b-4cc8-ab4a-f8ef615eb349) |Azure Security Center monitors the data discovery and classification scan results for your SQL databases and provides recommendations to classify the sensitive data in your databases for better monitoring and security |AuditIfNotExists, Disabled |[3.0.0-preview](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_SQLDbDataClassification_Audit.json) |
### Encrypt all sensitive information in transit
@@ -311,7 +311,7 @@ This built-in initiative is deployed as part of the
|---|---|---|---|
|[Advanced data security should be enabled on SQL Managed Instance](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fabfb7388-5bf4-4ad7-ba99-2cd2f41cebb9) |Audit each SQL Managed Instance without advanced data security. |AuditIfNotExists, Disabled |[1.0.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/SQL/SqlManagedInstance_AdvancedDataSecurity_Audit.json) |
|[Advanced data security should be enabled on your SQL servers](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fabfb4388-5bf4-4ad7-ba82-2cd2f41ceae9) |Audit SQL servers without Advanced Data Security |AuditIfNotExists, Disabled |[2.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/SQL/SqlServer_AdvancedDataSecurity_Audit.json) |
-|[Sensitive data in your SQL databases should be classified](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fcc9835f2-9f6b-4cc8-ab4a-f8ef615eb349) |Azure Security Center monitors the data discovery and classification scan results for your SQL databases and provides recommendations to classify the sensitive data in your databases for better monitoring and security |AuditIfNotExists, Disabled |[2.0.0-preview](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_SQLDbDataClassification_Audit.json) |
+|[Sensitive data in your SQL databases should be classified](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fcc9835f2-9f6b-4cc8-ab4a-f8ef615eb349) |Azure Security Center monitors the data discovery and classification scan results for your SQL databases and provides recommendations to classify the sensitive data in your databases for better monitoring and security |AuditIfNotExists, Disabled |[3.0.0-preview](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_SQLDbDataClassification_Audit.json) |
### Use Azure RBAC to control access to resources
@@ -333,8 +333,8 @@ This built-in initiative is deployed as part of the
|[Automation account variables should be encrypted](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F3657f5a0-770e-44a3-b44e-9431ba1e9735) |It is important to enable encryption of Automation account variable assets when storing sensitive data |Audit, Deny, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Automation/Automation_AuditUnencryptedVars_Audit.json) |
|[Disk encryption should be applied on virtual machines](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F0961003e-5a0a-4549-abde-af6a37f2724d) |Virtual machines without an enabled disk encryption will be monitored by Azure Security Center as recommendations. |AuditIfNotExists, Disabled |[2.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_UnencryptedVMDisks_Audit.json) |
|[Service Fabric clusters should have the ClusterProtectionLevel property set to EncryptAndSign](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F617c02be-7f02-4efd-8836-3180d47b6c68) |Service Fabric provides three levels of protection (None, Sign and EncryptAndSign) for node-to-node communication using a primary cluster certificate. Set the protection level to ensure that all node-to-node messages are encrypted and digitally signed |Audit, Deny, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Service%20Fabric/ServiceFabric_AuditClusterProtectionLevel_Audit.json) |
-|[SQL Managed Instance TDE protector should be encrypted with your own key](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F048248b0-55cd-46da-b1ff-39efd52db260) |Transparent Data Encryption (TDE) with your own key support provides increased transparency and control over the TDE Protector, increased security with an HSM-backed external service, and promotion of separation of duties. |AuditIfNotExists, Disabled |[1.0.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/SQL/SqlManagedInstance_EnsureServerTDEisEncryptedWithYourOwnKey_Audit.json) |
-|[SQL server TDE protector should be encrypted with your own key](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F0d134df8-db83-46fb-ad72-fe0c9428c8dd) |Transparent Data Encryption (TDE) with your own key support provides increased transparency and control over the TDE Protector, increased security with an HSM-backed external service, and promotion of separation of duties. |AuditIfNotExists, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/SQL/SqlServer_EnsureServerTDEisEncryptedWithYourOwnKey_Audit.json) |
+|[SQL managed instances should use customer-managed keys to encrypt data at rest](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F048248b0-55cd-46da-b1ff-39efd52db260) |Implementing Transparent Data Encryption (TDE) with your own key provides you with increased transparency and control over the TDE Protector, increased security with an HSM-backed external service, and promotion of separation of duties. This recommendation applies to organizations with a related compliance requirement. |AuditIfNotExists, Disabled |[1.0.2](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/SQL/SqlManagedInstance_EnsureServerTDEisEncryptedWithYourOwnKey_Audit.json) |
+|[SQL servers should use customer-managed keys to encrypt data at rest](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F0d134df8-db83-46fb-ad72-fe0c9428c8dd) |Implementing Transparent Data Encryption (TDE) with your own key provides increased transparency and control over the TDE Protector, increased security with an HSM-backed external service, and promotion of separation of duties. This recommendation applies to organizations with a related compliance requirement. |AuditIfNotExists, Disabled |[2.0.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/SQL/SqlServer_EnsureServerTDEisEncryptedWithYourOwnKey_Audit.json) |
|[Transparent Data Encryption on SQL databases should be enabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F17k78e20-9358-41c9-923c-fb736d382a12) |Transparent data encryption should be enabled to protect data-at-rest and meet compliance requirements |AuditIfNotExists, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/SQL/SqlDBEncryption_Audit.json) |
|[Unattached disks should be encrypted](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F2c89a2e5-7285-40fe-afe0-ae8654b92fb2) |This policy audits any unattached disk without encryption enabled. |Audit, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Compute/UnattachedDisk_Encryption_Audit.json) |
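The two renamed customer-managed-key policies earlier in this table come down to checking which kind of TDE protector a server uses. A hedged sketch for the SQL server case, assuming the protector surfaces as a `Microsoft.Sql/servers/encryptionProtector` child resource whose `serverKeyType` is `AzureKeyVault` when the key is customer-managed (this is a sketch, not the linked built-in definition):

```json
{
  "properties": {
    "displayName": "Illustrative sketch: audit SQL servers whose TDE protector is not customer-managed",
    "description": "Hypothetical AuditIfNotExists rule; the existence condition passes only when the protector key comes from Key Vault.",
    "mode": "Indexed",
    "policyRule": {
      "if": { "field": "type", "equals": "Microsoft.Sql/servers" },
      "then": {
        "effect": "AuditIfNotExists",
        "details": {
          "type": "Microsoft.Sql/servers/encryptionProtector",
          "existenceCondition": {
            "field": "Microsoft.Sql/servers/encryptionProtector/serverKeyType",
            "equals": "AzureKeyVault"
          }
        }
      }
    }
  }
}
```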
@@ -367,8 +367,8 @@ This built-in initiative is deployed as part of the
|Name<br /><sub>(Azure portal)</sub> |Description |Effect(s) |Version<br /><sub>(GitHub)</sub> |
|---|---|---|---|
-|[System updates on virtual machine scale sets should be installed](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fc3f317a7-a95c-4547-b7e7-11017ebdf2fe) |Audit whether there are any missing system security updates and critical updates that should be installed to ensure that your Windows and Linux virtual machine scale sets are secure. |AuditIfNotExists, Disabled |[2.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_VmssMissingSystemUpdates_Audit.json) |
-|[System updates should be installed on your machines](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F86b3d65f-7626-441e-b690-81a8b71cff60) |Missing security system updates on your servers will be monitored by Azure Security Center as recommendations |AuditIfNotExists, Disabled |[2.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_MissingSystemUpdates_Audit.json) |
+|[System updates on virtual machine scale sets should be installed](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fc3f317a7-a95c-4547-b7e7-11017ebdf2fe) |Audit whether there are any missing system security updates and critical updates that should be installed to ensure that your Windows and Linux virtual machine scale sets are secure. |AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_VmssMissingSystemUpdates_Audit.json) |
+|[System updates should be installed on your machines](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F86b3d65f-7626-441e-b690-81a8b71cff60) |Missing security system updates on your servers will be monitored by Azure Security Center as recommendations |AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_MissingSystemUpdates_Audit.json) |
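To see which machines the system-updates definition above is currently flagging, the compliance records can be queried directly; a sketch using the definition GUID from the table (run in the target subscription's context):

```azurecli
# List a few non-compliant resources for the system-updates definition.
az policy state list \
  --filter "policyDefinitionName eq '86b3d65f-7626-441e-b690-81a8b71cff60' and complianceState eq 'NonCompliant'" \
  --top 5
```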
### Deploy automated third-party software patch management solution
@@ -394,11 +394,11 @@ This built-in initiative is deployed as part of the
|Name<br /><sub>(Azure portal)</sub> |Description |Effect(s) |Version<br /><sub>(GitHub)</sub> |
|---|---|---|---|
-|[Vulnerabilities in container security configurations should be remediated](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fe8cbc669-f12d-49eb-93e7-9273119e9933) |Audit vulnerabilities in security configuration on machines with Docker installed and display as recommendations in Azure Security Center. |AuditIfNotExists, Disabled |[2.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_ContainerBenchmark_Audit.json) |
-|[Vulnerabilities in security configuration on your machines should be remediated](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fe1e5fd5d-3e4c-4ce1-8661-7d1873ae6b15) |Servers which do not satisfy the configured baseline will be monitored by Azure Security Center as recommendations |AuditIfNotExists, Disabled |[2.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_OSVulnerabilities_Audit.json) |
-|[Vulnerabilities in security configuration on your virtual machine scale sets should be remediated](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F3c735d8a-a4ba-4a3a-b7cf-db7754cf57f4) |Audit the OS vulnerabilities on your virtual machine scale sets to protect them from attacks. |AuditIfNotExists, Disabled |[2.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_VmssOSVulnerabilities_Audit.json) |
-|[Vulnerabilities on your SQL databases should be remediated](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Ffeedbf84-6b99-488c-acc2-71c829aa5ffc) |Monitor Vulnerability Assessment scan results and recommendations for how to remediate database vulnerabilities. |AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_SQLDbVulnerabilities_Audit.json) |
-|[Vulnerabilities should be remediated by a Vulnerability Assessment solution](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F760a85ff-6162-42b3-8d70-698e268f648c) |Monitors vulnerabilities detected by Vulnerability Assessment solution and VMs without a Vulnerability Assessment solution in Azure Security Center as recommendations. |AuditIfNotExists, Disabled |[2.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_VMVulnerabilities_Audit.json) |
+|[Vulnerabilities in container security configurations should be remediated](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fe8cbc669-f12d-49eb-93e7-9273119e9933) |Audit vulnerabilities in security configuration on machines with Docker installed and display as recommendations in Azure Security Center. |AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_ContainerBenchmark_Audit.json) |
+|[Vulnerabilities in security configuration on your machines should be remediated](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fe1e5fd5d-3e4c-4ce1-8661-7d1873ae6b15) |Servers which do not satisfy the configured baseline will be monitored by Azure Security Center as recommendations |AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_OSVulnerabilities_Audit.json) |
+|[Vulnerabilities in security configuration on your virtual machine scale sets should be remediated](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F3c735d8a-a4ba-4a3a-b7cf-db7754cf57f4) |Audit the OS vulnerabilities on your virtual machine scale sets to protect them from attacks. |AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_VmssOSVulnerabilities_Audit.json) |
+|[Vulnerabilities on your SQL databases should be remediated](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Ffeedbf84-6b99-488c-acc2-71c829aa5ffc) |Monitor Vulnerability Assessment scan results and recommendations for how to remediate database vulnerabilities. |AuditIfNotExists, Disabled |[4.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_SQLDbVulnerabilities_Audit.json) |
+|[Vulnerabilities should be remediated by a Vulnerability Assessment solution](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F760a85ff-6162-42b3-8d70-698e268f648c) |Monitors vulnerabilities detected by Vulnerability Assessment solution and VMs without a Vulnerability Assessment solution in Azure Security Center as recommendations. |AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_VMVulnerabilities_Audit.json) |
## Inventory and Asset Management
@@ -409,7 +409,7 @@ This built-in initiative is deployed as part of the
|Name<br /><sub>(Azure portal)</sub> |Description |Effect(s) |Version<br /><sub>(GitHub)</sub> |
|---|---|---|---|
-|[Adaptive application controls for defining safe applications should be enabled on your machines](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F47a6b606-51aa-4496-8bb7-64b11cf66adc) |Enable application controls to define the list of known-safe applications running on your machines, and alert you when other applications run. This helps harden your machines against malware. To simplify the process of configuring and maintaining your rules, Security Center uses machine learning to analyze the applications running on each machine and suggest the list of known-safe applications. |AuditIfNotExists, Disabled |[2.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_AdaptiveApplicationControls_Audit.json) |
+|[Adaptive application controls for defining safe applications should be enabled on your machines](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F47a6b606-51aa-4496-8bb7-64b11cf66adc) |Enable application controls to define the list of known-safe applications running on your machines, and alert you when other applications run. This helps harden your machines against malware. To simplify the process of configuring and maintaining your rules, Security Center uses machine learning to analyze the applications running on each machine and suggest the list of known-safe applications. |AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_AdaptiveApplicationControls_Audit.json) |
### Use only approved Azure services
@@ -428,7 +428,7 @@ This built-in initiative is deployed as part of the
|Name<br /><sub>(Azure portal)</sub> |Description |Effect(s) |Version<br /><sub>(GitHub)</sub> |
|---|---|---|---|
-|[Adaptive application controls for defining safe applications should be enabled on your machines](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F47a6b606-51aa-4496-8bb7-64b11cf66adc) |Enable application controls to define the list of known-safe applications running on your machines, and alert you when other applications run. This helps harden your machines against malware. To simplify the process of configuring and maintaining your rules, Security Center uses machine learning to analyze the applications running on each machine and suggest the list of known-safe applications. |AuditIfNotExists, Disabled |[2.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_AdaptiveApplicationControls_Audit.json) |
+|[Adaptive application controls for defining safe applications should be enabled on your machines](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F47a6b606-51aa-4496-8bb7-64b11cf66adc) |Enable application controls to define the list of known-safe applications running on your machines, and alert you when other applications run. This helps harden your machines against malware. To simplify the process of configuring and maintaining your rules, Security Center uses machine learning to analyze the applications running on each machine and suggest the list of known-safe applications. |AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_AdaptiveApplicationControls_Audit.json) |
## Secure Configuration
@@ -439,9 +439,9 @@ This built-in initiative is deployed as part of the
|Name<br /><sub>(Azure portal)</sub> |Description |Effect(s) |Version<br /><sub>(GitHub)</sub> |
|---|---|---|---|
-|[Vulnerabilities in container security configurations should be remediated](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fe8cbc669-f12d-49eb-93e7-9273119e9933) |Audit vulnerabilities in security configuration on machines with Docker installed and display as recommendations in Azure Security Center. |AuditIfNotExists, Disabled |[2.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_ContainerBenchmark_Audit.json) |
-|[Vulnerabilities in security configuration on your machines should be remediated](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fe1e5fd5d-3e4c-4ce1-8661-7d1873ae6b15) |Servers which do not satisfy the configured baseline will be monitored by Azure Security Center as recommendations |AuditIfNotExists, Disabled |[2.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_OSVulnerabilities_Audit.json) |
-|[Vulnerabilities in security configuration on your virtual machine scale sets should be remediated](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F3c735d8a-a4ba-4a3a-b7cf-db7754cf57f4) |Audit the OS vulnerabilities on your virtual machine scale sets to protect them from attacks. |AuditIfNotExists, Disabled |[2.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_VmssOSVulnerabilities_Audit.json) |
+|[Vulnerabilities in container security configurations should be remediated](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fe8cbc669-f12d-49eb-93e7-9273119e9933) |Audit vulnerabilities in security configuration on machines with Docker installed and display as recommendations in Azure Security Center. |AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_ContainerBenchmark_Audit.json) |
+|[Vulnerabilities in security configuration on your machines should be remediated](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fe1e5fd5d-3e4c-4ce1-8661-7d1873ae6b15) |Servers which do not satisfy the configured baseline will be monitored by Azure Security Center as recommendations |AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_OSVulnerabilities_Audit.json) |
+|[Vulnerabilities in security configuration on your virtual machine scale sets should be remediated](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F3c735d8a-a4ba-4a3a-b7cf-db7754cf57f4) |Audit the OS vulnerabilities on your virtual machine scale sets to protect them from attacks. |AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_VmssOSVulnerabilities_Audit.json) |
### Implement automated configuration monitoring for operating systems
@@ -450,9 +450,9 @@ This built-in initiative is deployed as part of the
|Name<br /><sub>(Azure portal)</sub> |Description |Effect(s) |Version<br /><sub>(GitHub)</sub> |
|---|---|---|---|
-|[Vulnerabilities in container security configurations should be remediated](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fe8cbc669-f12d-49eb-93e7-9273119e9933) |Audit vulnerabilities in security configuration on machines with Docker installed and display as recommendations in Azure Security Center. |AuditIfNotExists, Disabled |[2.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_ContainerBenchmark_Audit.json) |
-|[Vulnerabilities in security configuration on your machines should be remediated](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fe1e5fd5d-3e4c-4ce1-8661-7d1873ae6b15) |Servers which do not satisfy the configured baseline will be monitored by Azure Security Center as recommendations |AuditIfNotExists, Disabled |[2.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_OSVulnerabilities_Audit.json) |
-|[Vulnerabilities in security configuration on your virtual machine scale sets should be remediated](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F3c735d8a-a4ba-4a3a-b7cf-db7754cf57f4) |Audit the OS vulnerabilities on your virtual machine scale sets to protect them from attacks. |AuditIfNotExists, Disabled |[2.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_VmssOSVulnerabilities_Audit.json) |
+|[Vulnerabilities in container security configurations should be remediated](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fe8cbc669-f12d-49eb-93e7-9273119e9933) |Audit vulnerabilities in security configuration on machines with Docker installed and display as recommendations in Azure Security Center. |AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_ContainerBenchmark_Audit.json) |
+|[Vulnerabilities in security configuration on your machines should be remediated](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fe1e5fd5d-3e4c-4ce1-8661-7d1873ae6b15) |Servers which do not satisfy the configured baseline will be monitored by Azure Security Center as recommendations |AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_OSVulnerabilities_Audit.json) |
+|[Vulnerabilities in security configuration on your virtual machine scale sets should be remediated](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F3c735d8a-a4ba-4a3a-b7cf-db7754cf57f4) |Audit the OS vulnerabilities on your virtual machine scale sets to protect them from attacks. |AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_VmssOSVulnerabilities_Audit.json) |
### Manage Azure secrets securely
@@ -461,7 +461,7 @@ This built-in initiative is deployed as part of the
|Name<br /><sub>(Azure portal)</sub> |Description |Effect(s) |Version<br /><sub>(GitHub)</sub> |
|---|---|---|---|
-|[Key vault should have purge protection enabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F0b60c0b2-2dc2-4e1c-b5c9-abbed971de53) |Malicious deletion of a key vault can lead to permanent data loss. A malicious insider in your organization may potentially be able to gain access to delete and purge key vaults. Purge protection protects you from insider attacks by enforcing a mandatory retention period for soft deleted key vaults. No one inside your organization or Microsoft will be able to purge your key vaults during the soft delete retention period. |Audit, Deny, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Key%20Vault/KeyVault_Recoverable_Audit.json) |
+|[Key vaults should have purge protection enabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F0b60c0b2-2dc2-4e1c-b5c9-abbed971de53) |Malicious deletion of a key vault can lead to permanent data loss. A malicious insider in your organization can potentially delete and purge key vaults. Purge protection protects you from insider attacks by enforcing a mandatory retention period for soft deleted key vaults. No one inside your organization or Microsoft will be able to purge your key vaults during the soft delete retention period. |Audit, Deny, Disabled |[1.1.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Key%20Vault/KeyVault_Recoverable_Audit.json) |
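For reference, the remediation behind the renamed purge-protection definition is a one-line vault update; `kv-contoso` is a placeholder name, and note the setting cannot be turned off once enabled:

```azurecli
# Enable purge protection on an existing vault (irreversible;
# soft delete must already be on, as it is for new vaults).
az keyvault update \
  --name kv-contoso \
  --enable-purge-protection true
```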
### Manage identities securely and automatically
@@ -483,8 +483,8 @@ This built-in initiative is deployed as part of the
|Name<br /><sub>(Azure portal)</sub> |Description |Effect(s) |Version<br /><sub>(GitHub)</sub> |
|---|---|---|---|
-|[Endpoint protection solution should be installed on virtual machine scale sets](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F26a828e1-e88f-464e-bbb3-c134a282b9de) |Audit the existence and health of an endpoint protection solution on your virtual machines scale sets, to protect them from threats and vulnerabilities. |AuditIfNotExists, Disabled |[2.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_VmssMissingEndpointProtection_Audit.json) |
-|[Monitor missing Endpoint Protection in Azure Security Center](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Faf6cd1bd-1635-48cb-bde7-5b15693900b9) |Servers without an installed Endpoint Protection agent will be monitored by Azure Security Center as recommendations |AuditIfNotExists, Disabled |[2.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_MissingEndpointProtection_Audit.json) |
+|[Endpoint protection solution should be installed on virtual machine scale sets](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F26a828e1-e88f-464e-bbb3-c134a282b9de) |Audit the existence and health of an endpoint protection solution on your virtual machines scale sets, to protect them from threats and vulnerabilities. |AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_VmssMissingEndpointProtection_Audit.json) |
+|[Monitor missing Endpoint Protection in Azure Security Center](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Faf6cd1bd-1635-48cb-bde7-5b15693900b9) |Servers without an installed Endpoint Protection agent will be monitored by Azure Security Center as recommendations |AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_MissingEndpointProtection_Audit.json) |
### Ensure anti-malware software and signatures are updated
@@ -508,7 +508,7 @@ This built-in initiative is deployed as part of the
|[Geo-redundant backup should be enabled for Azure Database for MariaDB](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F0ec47710-77ff-4a3d-9181-6aa50af424d0) |Azure Database for MariaDB allows you to choose the redundancy option for your database server. It can be set to a geo-redundant backup storage in which the data is not only stored within the region in which your server is hosted, but is also replicated to a paired region to provide recovery option in case of a region failure. Configuring geo-redundant storage for backup is only allowed during server create. |Audit, Disabled |[1.0.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/SQL/GeoRedundant_DBForMariaDB_Audit.json) |
|[Geo-redundant backup should be enabled for Azure Database for MySQL](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F82339799-d096-41ae-8538-b108becf0970) |Azure Database for MySQL allows you to choose the redundancy option for your database server. It can be set to a geo-redundant backup storage in which the data is not only stored within the region in which your server is hosted, but is also replicated to a paired region to provide recovery option in case of a region failure. Configuring geo-redundant storage for backup is only allowed during server create. |Audit, Disabled |[1.0.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/SQL/GeoRedundant_DBForMySQL_Audit.json) |
|[Geo-redundant backup should be enabled for Azure Database for PostgreSQL](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F48af4db5-9b8b-401c-8e74-076be876a430) |Azure Database for PostgreSQL allows you to choose the redundancy option for your database server. It can be set to a geo-redundant backup storage in which the data is not only stored within the region in which your server is hosted, but is also replicated to a paired region to provide recovery option in case of a region failure. Configuring geo-redundant storage for backup is only allowed during server create. |Audit, Disabled |[1.0.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/SQL/GeoRedundant_DBForPostgreSQL_Audit.json) |
-|[Long-term geo-redundant backup should be enabled for Azure SQL Databases](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fd38fc420-0735-4ef3-ac11-c806f651a570) |This policy audits any Azure SQL Database with long-term geo-redundant backup not enabled. |AuditIfNotExists, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/SQL/GeoRedundant_SQLDatabase_AuditIfNotExists.json) |
+|[Long-term geo-redundant backup should be enabled for Azure SQL Databases](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fd38fc420-0735-4ef3-ac11-c806f651a570) |This policy audits any Azure SQL Database with long-term geo-redundant backup not enabled. |AuditIfNotExists, Disabled |[2.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/SQL/GeoRedundant_SQLDatabase_AuditIfNotExists.json) |
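As the descriptions note, geo-redundant backup storage for the open-source database services can only be chosen when the server is created, while long-term retention for Azure SQL is configured per database. A hedged CLI sketch with placeholder names (MySQL shown; MariaDB and PostgreSQL take the same flag):

```azurecli
# Geo-redundant backup must be selected at server create time.
az mysql server create \
  --resource-group rg-data \
  --name mysql-contoso \
  --admin-user dbadmin \
  --admin-password '<strong-password>' \
  --sku-name GP_Gen5_2 \
  --geo-redundant-backup Enabled

# Long-term (geo-redundant) retention for an Azure SQL database.
az sql db ltr-policy set \
  --resource-group rg-data \
  --server sql-contoso \
  --database appdb \
  --weekly-retention P12W \
  --monthly-retention P12M \
  --yearly-retention P5Y \
  --week-of-year 1
```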
### Perform complete system backups and backup any customer managed keys
@@ -521,7 +521,7 @@ This built-in initiative is deployed as part of the
|[Geo-redundant backup should be enabled for Azure Database for MariaDB](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F0ec47710-77ff-4a3d-9181-6aa50af424d0) |Azure Database for MariaDB allows you to choose the redundancy option for your database server. It can be set to a geo-redundant backup storage in which the data is not only stored within the region in which your server is hosted, but is also replicated to a paired region to provide recovery option in case of a region failure. Configuring geo-redundant storage for backup is only allowed during server create. |Audit, Disabled |[1.0.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/SQL/GeoRedundant_DBForMariaDB_Audit.json) |
|[Geo-redundant backup should be enabled for Azure Database for MySQL](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F82339799-d096-41ae-8538-b108becf0970) |Azure Database for MySQL allows you to choose the redundancy option for your database server. It can be set to a geo-redundant backup storage in which the data is not only stored within the region in which your server is hosted, but is also replicated to a paired region to provide recovery option in case of a region failure. Configuring geo-redundant storage for backup is only allowed during server create. |Audit, Disabled |[1.0.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/SQL/GeoRedundant_DBForMySQL_Audit.json) |
|[Geo-redundant backup should be enabled for Azure Database for PostgreSQL](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F48af4db5-9b8b-401c-8e74-076be876a430) |Azure Database for PostgreSQL allows you to choose the redundancy option for your database server. It can be set to a geo-redundant backup storage in which the data is not only stored within the region in which your server is hosted, but is also replicated to a paired region to provide recovery option in case of a region failure. Configuring geo-redundant storage for backup is only allowed during server create. |Audit, Disabled |[1.0.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/SQL/GeoRedundant_DBForPostgreSQL_Audit.json) |
-|[Long-term geo-redundant backup should be enabled for Azure SQL Databases](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fd38fc420-0735-4ef3-ac11-c806f651a570) |This policy audits any Azure SQL Database with long-term geo-redundant backup not enabled. |AuditIfNotExists, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/SQL/GeoRedundant_SQLDatabase_AuditIfNotExists.json) |
+|[Long-term geo-redundant backup should be enabled for Azure SQL Databases](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fd38fc420-0735-4ef3-ac11-c806f651a570) |This policy audits any Azure SQL Database with long-term geo-redundant backup not enabled. |AuditIfNotExists, Disabled |[2.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/SQL/GeoRedundant_SQLDatabase_AuditIfNotExists.json) |
### Ensure protection of backups and customer managed keys
@@ -530,7 +530,7 @@ This built-in initiative is deployed as part of the
|Name<br /><sub>(Azure portal)</sub> |Description |Effect(s) |Version<br /><sub>(GitHub)</sub> |
|---|---|---|---|
-|[Key vault should have purge protection enabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F0b60c0b2-2dc2-4e1c-b5c9-abbed971de53) |Malicious deletion of a key vault can lead to permanent data loss. A malicious insider in your organization may potentially be able to gain access to delete and purge key vaults. Purge protection protects you from insider attacks by enforcing a mandatory retention period for soft deleted key vaults. No one inside your organization or Microsoft will be able to purge your key vaults during the soft delete retention period. |Audit, Deny, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Key%20Vault/KeyVault_Recoverable_Audit.json) |
+|[Key vaults should have purge protection enabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F0b60c0b2-2dc2-4e1c-b5c9-abbed971de53) |Malicious deletion of a key vault can lead to permanent data loss. A malicious insider in your organization can potentially delete and purge key vaults. Purge protection protects you from insider attacks by enforcing a mandatory retention period for soft deleted key vaults. No one inside your organization or Microsoft will be able to purge your key vaults during the soft delete retention period. |Audit, Deny, Disabled |[1.1.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Key%20Vault/KeyVault_Recoverable_Audit.json) |
## Incident Response
@@ -541,8 +541,8 @@ This built-in initiative is deployed as part of the
|Name<br /><sub>(Azure portal)</sub> |Description |Effect(s) |Version<br /><sub>(GitHub)</sub> |
|---|---|---|---|
-|[A security contact email address should be provided for your subscription](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F4f4f78b8-e367-4b10-a341-d9a4ad5cf1c7) |Enter an email address to receive notifications when Azure Security Center detects compromised resources |AuditIfNotExists, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_Security_contact_email.json) |
|[A security contact phone number should be provided for your subscription](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fb4d66858-c922-44e3-9566-5cdb7a7be744) |Enter a phone number to receive notifications when Azure Security Center detects compromised resources |AuditIfNotExists, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_Security_contact_phone_number.json) |
+|[Subscriptions should have a contact email address for security issues](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F4f4f78b8-e367-4b10-a341-d9a4ad5cf1c7) |To ensure the relevant people in your organization are notified when there is a potential security breach in one of your subscriptions, set a security contact to receive email notifications from Security Center. |AuditIfNotExists, Disabled |[1.0.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_Security_contact_email.json) |
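Both contact definitions in this table can be satisfied with one CLI call; the address and number below are placeholders, and the parameter names come from the `az security contact` group, so treat this as a sketch rather than a verified recipe:

```azurecli
# Register a default security contact for Security Center alerts.
az security contact create \
  --name "default1" \
  --email "secops@contoso.com" \
  --phone "1-555-0100" \
  --alert-notifications "on" \
  --alerts-admins "on"
```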
> [!NOTE]
> Availability of specific Azure Policy definitions may vary in Azure Government and other national clouds.
governance https://docs.microsoft.com/en-us/azure/governance/policy/samples/built-in-initiatives https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/governance/policy/samples/built-in-initiatives.md
@@ -1,7 +1,7 @@
---
title: List of built-in policy initiatives
description: List built-in policy initiatives for Azure Policy. Categories include Regulatory Compliance, Guest Configuration, and more.
-ms.date: 11/20/2020
+ms.date: 01/08/2021
ms.topic: sample
ms.custom: generated
---
governance https://docs.microsoft.com/en-us/azure/governance/policy/samples/built-in-policies https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/governance/policy/samples/built-in-policies.md
@@ -1,7 +1,7 @@
---
title: List of built-in policy definitions
description: List built-in policy definitions for Azure Policy. Categories include Tags, Regulatory Compliance, Key Vault, Kubernetes, Guest Configuration, and more.
-ms.date: 11/20/2020
+ms.date: 01/08/2021
ms.topic: sample
ms.custom: generated
---
@@ -47,6 +47,10 @@ side of the page. Otherwise, use <kbd>Ctrl</kbd>-<kbd>F</kbd> to use your browse
[!INCLUDE [azure-policy-reference-policies-azure-data-explorer](../../../../includes/policy/reference/bycat/policies-azure-data-explorer.md)]
+## Azure Stack Edge
+
+[!INCLUDE [azure-policy-reference-policies-azure-stack-edge](../../../../includes/policy/reference/bycat/policies-azure-stack-edge.md)]
+
## Backup

[!INCLUDE [azure-policy-reference-policies-backup](../../../../includes/policy/reference/bycat/policies-backup.md)]
@@ -55,6 +59,10 @@ side of the page. Otherwise, use <kbd>Ctrl</kbd>-<kbd>F</kbd> to use your browse
[!INCLUDE [azure-policy-reference-policies-batch](../../../../includes/policy/reference/bycat/policies-batch.md)]
+## Bot Services
+
+[!INCLUDE [azure-policy-reference-policies-bot-services](../../../../includes/policy/reference/bycat/policies-bot-services.md)]
+
## Cache

[!INCLUDE [azure-policy-reference-policies-cache](../../../../includes/policy/reference/bycat/policies-cache.md)]
@@ -79,6 +87,10 @@ side of the page. Otherwise, use <kbd>Ctrl</kbd>-<kbd>F</kbd> to use your browse
[!INCLUDE [azure-policy-reference-policies-custom-provider](../../../../includes/policy/reference/bycat/policies-custom-provider.md)]
+## Data Box
+
+[!INCLUDE [azure-policy-reference-policies-data-box](../../../../includes/policy/reference/bycat/policies-data-box.md)]
+
## Data Lake

[!INCLUDE [azure-policy-reference-policies-data-lake](../../../../includes/policy/reference/bycat/policies-data-lake.md)]
governance https://docs.microsoft.com/en-us/azure/governance/policy/samples/cis-azure-1-1-0 https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/governance/policy/samples/cis-azure-1-1-0.md
@@ -1,7 +1,7 @@
---
title: Regulatory Compliance details for CIS Microsoft Azure Foundations Benchmark
description: Details of the CIS Microsoft Azure Foundations Benchmark Regulatory Compliance built-in initiative. Each control is mapped to one or more Azure Policy definitions that assist with assessment.
-ms.date: 11/20/2020
+ms.date: 01/08/2021
ms.topic: sample
ms.custom: generated
---
@@ -45,8 +45,8 @@ This built-in initiative is deployed as part of the
|Name<br /><sub>(Azure portal)</sub> |Description |Effect(s) |Version<br /><sub>(GitHub)</sub> |
|---|---|---|---|
-|[MFA should be enabled accounts with write permissions on your subscription](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F9297c21d-2ed6-4474-b48f-163f75654ce3) |Multi-Factor Authentication (MFA) should be enabled for all subscription accounts with write privileges to prevent a breach of accounts or resources. |AuditIfNotExists, Disabled |[2.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_EnableMFAForWritePermissions_Audit.json) |
-|[MFA should be enabled on accounts with owner permissions on your subscription](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Faa633080-8b72-40c4-a2d7-d00c03e80bed) |Multi-Factor Authentication (MFA) should be enabled for all subscription accounts with owner permissions to prevent a breach of accounts or resources. |AuditIfNotExists, Disabled |[2.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_EnableMFAForOwnerPermissions_Audit.json) |
+|[MFA should be enabled accounts with write permissions on your subscription](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F9297c21d-2ed6-4474-b48f-163f75654ce3) |Multi-Factor Authentication (MFA) should be enabled for all subscription accounts with write privileges to prevent a breach of accounts or resources. |AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_EnableMFAForWritePermissions_Audit.json) |
+|[MFA should be enabled on accounts with owner permissions on your subscription](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Faa633080-8b72-40c4-a2d7-d00c03e80bed) |Multi-Factor Authentication (MFA) should be enabled for all subscription accounts with owner permissions to prevent a breach of accounts or resources. |AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_EnableMFAForOwnerPermissions_Audit.json) |
### Ensure that multi-factor authentication is enabled for all non-privileged users
@@ -55,7 +55,7 @@ This built-in initiative is deployed as part of the
|Name<br /><sub>(Azure portal)</sub> |Description |Effect(s) |Version<br /><sub>(GitHub)</sub> |
|---|---|---|---|
-|[MFA should be enabled on accounts with read permissions on your subscription](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fe3576e28-8b17-4677-84c3-db2990658d64) |Multi-Factor Authentication (MFA) should be enabled for all subscription accounts with read privileges to prevent a breach of accounts or resources. |AuditIfNotExists, Disabled |[2.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_EnableMFAForReadPermissions_Audit.json) |
+|[MFA should be enabled on accounts with read permissions on your subscription](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fe3576e28-8b17-4677-84c3-db2990658d64) |Multi-Factor Authentication (MFA) should be enabled for all subscription accounts with read privileges to prevent a breach of accounts or resources. |AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_EnableMFAForReadPermissions_Audit.json) |
### Ensure that there are no guest users
@@ -64,9 +64,9 @@ This built-in initiative is deployed as part of the
|Name<br /><sub>(Azure portal)</sub> |Description |Effect(s) |Version<br /><sub>(GitHub)</sub> |
|---|---|---|---|
-|[External accounts with owner permissions should be removed from your subscription](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Ff8456c1c-aa66-4dfb-861a-25d127b775c9) |External accounts with owner permissions should be removed from your subscription in order to prevent unmonitored access. |AuditIfNotExists, Disabled |[2.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_RemoveExternalAccountsWithOwnerPermissions_Audit.json) |
-|[External accounts with read permissions should be removed from your subscription](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F5f76cf89-fbf2-47fd-a3f4-b891fa780b60) |External accounts with read privileges should be removed from your subscription in order to prevent unmonitored access. |AuditIfNotExists, Disabled |[2.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_RemoveExternalAccountsReadPermissions_Audit.json) |
-|[External accounts with write permissions should be removed from your subscription](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F5c607a2e-c700-4744-8254-d77e7c9eb5e4) |External accounts with write privileges should be removed from your subscription in order to prevent unmonitored access. |AuditIfNotExists, Disabled |[2.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_RemoveExternalAccountsWritePermissions_Audit.json) |
+|[External accounts with owner permissions should be removed from your subscription](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Ff8456c1c-aa66-4dfb-861a-25d127b775c9) |External accounts with owner permissions should be removed from your subscription in order to prevent unmonitored access. |AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_RemoveExternalAccountsWithOwnerPermissions_Audit.json) |
+|[External accounts with read permissions should be removed from your subscription](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F5f76cf89-fbf2-47fd-a3f4-b891fa780b60) |External accounts with read privileges should be removed from your subscription in order to prevent unmonitored access. |AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_RemoveExternalAccountsReadPermissions_Audit.json) |
+|[External accounts with write permissions should be removed from your subscription](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F5c607a2e-c700-4744-8254-d77e7c9eb5e4) |External accounts with write privileges should be removed from your subscription in order to prevent unmonitored access. |AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_RemoveExternalAccountsWritePermissions_Audit.json) |
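Any definition in these tables can also be assigned on its own rather than through the initiative; a sketch assigning the owner-MFA definition by the GUID shown earlier in this section, at a placeholder subscription scope:

```azurecli
# Assign one built-in definition directly (GUID from the table above).
az policy assignment create \
  --name "audit-mfa-owners" \
  --scope "/subscriptions/00000000-0000-0000-0000-000000000000" \
  --policy "aa633080-8b72-40c4-a2d7-d00c03e80bed"
```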
### Ensure that no custom subscription owner roles are created
@@ -86,7 +86,7 @@ This built-in initiative is deployed as part of the
|Name<br /><sub>(Azure portal)</sub> |Description |Effect(s) |Version<br /><sub>(GitHub)</sub> |
|---|---|---|---|
-|[Automatic provisioning of the Log Analytics monitoring agent should be enabled on your subscription](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F475aae12-b88a-4572-8b36-9b712b2b3a17) |Enable automatic provisioning of the Log Analytics monitoring agent in order to collect security data |AuditIfNotExists, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_Automatic_provisioning_log_analytics_monitoring_agent.json) |
+|[Auto provisioning of the Log Analytics agent should be enabled on your subscription](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F475aae12-b88a-4572-8b36-9b712b2b3a17) |To monitor for security vulnerabilities and threats, Azure Security Center collects data from your Azure virtual machines. Data is collected by the Log Analytics agent, formerly known as the Microsoft Monitoring Agent (MMA), which reads various security-related configurations and event logs from the machine and copies the data to your Log Analytics workspace for analysis. We recommend enabling auto provisioning to automatically deploy the agent to all supported Azure VMs and any new ones that are created. |AuditIfNotExists, Disabled |[1.0.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_Automatic_provisioning_log_analytics_monitoring_agent.json) |
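The renamed auto-provisioning definition audits a single subscription-level setting, which can be flipped as follows (`default` is the built-in setting name):

```azurecli
# Turn on auto provisioning of the Log Analytics agent
# for the current subscription.
az security auto-provisioning-setting update \
  --name "default" \
  --auto-provision "On"
```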
### Ensure ASC Default policy setting "Monitor System Updates" is not "Disabled"
@@ -95,7 +95,7 @@ This built-in initiative is deployed as part of the
|Name<br /><sub>(Azure portal)</sub> |Description |Effect(s) |Version<br /><sub>(GitHub)</sub> |
|---|---|---|---|
-|[System updates should be installed on your machines](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F86b3d65f-7626-441e-b690-81a8b71cff60) |Missing security system updates on your servers will be monitored by Azure Security Center as recommendations |AuditIfNotExists, Disabled |[2.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_MissingSystemUpdates_Audit.json) |
+|[System updates should be installed on your machines](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F86b3d65f-7626-441e-b690-81a8b71cff60) |Missing security system updates on your servers will be monitored by Azure Security Center as recommendations |AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_MissingSystemUpdates_Audit.json) |
### Ensure ASC Default policy setting "Monitor OS Vulnerabilities" is not "Disabled"
@@ -104,7 +104,7 @@ This built-in initiative is deployed as part of the
|Name<br /><sub>(Azure portal)</sub> |Description |Effect(s) |Version<br /><sub>(GitHub)</sub> |
|---|---|---|---|
-|[Vulnerabilities in security configuration on your machines should be remediated](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fe1e5fd5d-3e4c-4ce1-8661-7d1873ae6b15) |Servers which do not satisfy the configured baseline will be monitored by Azure Security Center as recommendations |AuditIfNotExists, Disabled |[2.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_OSVulnerabilities_Audit.json) |
+|[Vulnerabilities in security configuration on your machines should be remediated](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fe1e5fd5d-3e4c-4ce1-8661-7d1873ae6b15) |Servers which do not satisfy the configured baseline will be monitored by Azure Security Center as recommendations |AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_OSVulnerabilities_Audit.json) |
### Ensure ASC Default policy setting "Monitor Endpoint Protection" is not "Disabled"
@@ -113,7 +113,7 @@ This built-in initiative is deployed as part of the
|Name<br /><sub>(Azure portal)</sub> |Description |Effect(s) |Version<br /><sub>(GitHub)</sub> |
|---|---|---|---|
-|[Monitor missing Endpoint Protection in Azure Security Center](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Faf6cd1bd-1635-48cb-bde7-5b15693900b9) |Servers without an installed Endpoint Protection agent will be monitored by Azure Security Center as recommendations |AuditIfNotExists, Disabled |[2.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_MissingEndpointProtection_Audit.json) |
+|[Monitor missing Endpoint Protection in Azure Security Center](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Faf6cd1bd-1635-48cb-bde7-5b15693900b9) |Servers without an installed Endpoint Protection agent will be monitored by Azure Security Center as recommendations |AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_MissingEndpointProtection_Audit.json) |
### Ensure ASC Default policy setting "Monitor Disk Encryption" is not "Disabled"
@@ -131,7 +131,7 @@ This built-in initiative is deployed as part of the
|Name<br /><sub>(Azure portal)</sub> |Description |Effect(s) |Version<br /><sub>(GitHub)</sub> |
|---|---|---|---|
-|[Adaptive network hardening recommendations should be applied on internet facing virtual machines](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F08e6af2d-db70-460a-bfe9-d5bd474ba9d6) |Azure Security Center analyzes the traffic patterns of Internet facing virtual machines and provides Network Security Group rule recommendations that reduce the potential attack surface |AuditIfNotExists, Disabled |[2.0.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_AdaptiveNetworkHardenings_Audit.json) |
+|[Adaptive network hardening recommendations should be applied on internet facing virtual machines](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F08e6af2d-db70-460a-bfe9-d5bd474ba9d6) |Azure Security Center analyzes the traffic patterns of Internet facing virtual machines and provides Network Security Group rule recommendations that reduce the potential attack surface |AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_AdaptiveNetworkHardenings_Audit.json) |
### Ensure ASC Default policy setting "Enable Next Generation Firewall(NGFW) Monitoring" is not "Disabled"
@@ -140,7 +140,7 @@ This built-in initiative is deployed as part of the
|Name<br /><sub>(Azure portal)</sub> |Description |Effect(s) |Version<br /><sub>(GitHub)</sub> |
|---|---|---|---|
-|[Internet-facing virtual machines should be protected with network security groups](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Ff6de0be7-9a8a-4b8a-b349-43cf02d22f7c) |Protect your virtual machines from potential threats by restricting access to them with network security groups (NSG). Learn more about controlling traffic with NSGs at [https://aka.ms/nsg-doc](https://aka.ms/nsg-doc) |AuditIfNotExists, Disabled |[2.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_NetworkSecurityGroupsOnInternetFacingVirtualMachines_Audit.json) |
+|[Internet-facing virtual machines should be protected with network security groups](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Ff6de0be7-9a8a-4b8a-b349-43cf02d22f7c) |Protect your virtual machines from potential threats by restricting access to them with network security groups (NSG). Learn more about controlling traffic with NSGs at [https://aka.ms/nsg-doc](https://aka.ms/nsg-doc) |AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_NetworkSecurityGroupsOnInternetFacingVirtualMachines_Audit.json) |
|[Subnets should be associated with a Network Security Group](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fe71308d3-144b-4262-b144-efdc3cc90517) |Protect your subnet from potential threats by restricting access to it with a Network Security Group (NSG). NSGs contain a list of Access Control List (ACL) rules that allow or deny network traffic to your subnet. |AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_NetworkSecurityGroupsOnSubnets_Audit.json) |

### Ensure ASC Default policy setting "Monitor Vulnerability Assessment" is not "Disabled"
@@ -150,7 +150,7 @@ This built-in initiative is deployed as part of the
|Name<br /><sub>(Azure portal)</sub> |Description |Effect(s) |Version<br /><sub>(GitHub)</sub> |
|---|---|---|---|
-|[Vulnerabilities should be remediated by a Vulnerability Assessment solution](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F760a85ff-6162-42b3-8d70-698e268f648c) |Monitors vulnerabilities detected by Vulnerability Assessment solution and VMs without a Vulnerability Assessment solution in Azure Security Center as recommendations. |AuditIfNotExists, Disabled |[2.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_VMVulnerabilities_Audit.json) |
+|[Vulnerabilities should be remediated by a Vulnerability Assessment solution](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F760a85ff-6162-42b3-8d70-698e268f648c) |Monitors vulnerabilities detected by Vulnerability Assessment solution and VMs without a Vulnerability Assessment solution in Azure Security Center as recommendations. |AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_VMVulnerabilities_Audit.json) |
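To act on a row like the one above rather than only track the docs change, the built-in definition can be assigned at subscription scope. A minimal sketch with `azure-mgmt-resource`; the assignment name and display name are hypothetical, while the definition ID is the one from the row.

```python
# Minimal sketch: assign the vulnerability-assessment policy above at
# subscription scope. Names are illustrative placeholders.
from azure.identity import DefaultAzureCredential
from azure.mgmt.resource import PolicyClient
from azure.mgmt.resource.policy.models import PolicyAssignment

subscription_id = "<subscription-id>"
client = PolicyClient(DefaultAzureCredential(), subscription_id)

assignment = client.policy_assignments.create(
    scope=f"/subscriptions/{subscription_id}",
    policy_assignment_name="monitor-vuln-assessment",  # hypothetical name
    parameters=PolicyAssignment(
        policy_definition_id=(
            "/providers/Microsoft.Authorization/policyDefinitions/"
            "760a85ff-6162-42b3-8d70-698e268f648c"
        ),
        display_name="Vulnerabilities should be remediated by a VA solution",
    ),
)
print(assignment.id)
```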
### Ensure ASC Default policy setting "Monitor JIT Network Access" is not "Disabled"
@@ -159,7 +159,7 @@ This built-in initiative is deployed as part of the
|Name<br /><sub>(Azure portal)</sub> |Description |Effect(s) |Version<br /><sub>(GitHub)</sub> |
|---|---|---|---|
-|[Management ports of virtual machines should be protected with just-in-time network access control](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fb0f33259-77d7-4c9e-aac6-3aabcfae693c) |Possible network Just In Time (JIT) access will be monitored by Azure Security Center as recommendations |AuditIfNotExists, Disabled |[2.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_JITNetworkAccess_Audit.json) |
+|[Management ports of virtual machines should be protected with just-in-time network access control](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fb0f33259-77d7-4c9e-aac6-3aabcfae693c) |Possible network Just In Time (JIT) access will be monitored by Azure Security Center as recommendations |AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_JITNetworkAccess_Audit.json) |
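Once assigned, compliance results for a definition such as the JIT one above can be queried through Policy Insights. A sketch assuming the `azure-mgmt-policyinsights` package; the OData filter targets the definition GUID from the row and should be verified against the SDK version in use.

```python
# Minimal sketch: list resources that are non-compliant with the JIT
# network access policy above. Assumes azure-mgmt-policyinsights.
from azure.identity import DefaultAzureCredential
from azure.mgmt.policyinsights import PolicyInsightsClient
from azure.mgmt.policyinsights.models import QueryOptions

subscription_id = "<subscription-id>"
client = PolicyInsightsClient(DefaultAzureCredential(), subscription_id)

states = client.policy_states.list_query_results_for_subscription(
    policy_states_resource="latest",
    subscription_id=subscription_id,
    query_options=QueryOptions(
        filter="policyDefinitionName eq 'b0f33259-77d7-4c9e-aac6-3aabcfae693c'"
               " and complianceState eq 'NonCompliant'"
    ),
)
for state in states:
    print(state.resource_id)
```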
### Ensure ASC Default policy setting "Monitor Adaptive Application Whitelisting" is not "Disabled"
@@ -168,7 +168,7 @@ This built-in initiative is deployed as part of the
|Name<br /><sub>(Azure portal)</sub> |Description |Effect(s) |Version<br /><sub>(GitHub)</sub> |
|---|---|---|---|
-|[Adaptive application controls for defining safe applications should be enabled on your machines](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F47a6b606-51aa-4496-8bb7-64b11cf66adc) |Enable application controls to define the list of known-safe applications running on your machines, and alert you when other applications run. This helps harden your machines against malware. To simplify the process of configuring and maintaining your rules, Security Center uses machine learning to analyze the applications running on each machine and suggest the list of known-safe applications. |AuditIfNotExists, Disabled |[2.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_AdaptiveApplicationControls_Audit.json) |
+|[Adaptive application controls for defining safe applications should be enabled on your machines](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F47a6b606-51aa-4496-8bb7-64b11cf66adc) |Enable application controls to define the list of known-safe applications running on your machines, and alert you when other applications run. This helps harden your machines against malware. To simplify the process of configuring and maintaining your rules, Security Center uses machine learning to analyze the applications running on each machine and suggest the list of known-safe applications. |AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_AdaptiveApplicationControls_Audit.json) |
### Ensure ASC Default policy setting "Monitor SQL Auditing" is not "Disabled"
@@ -177,7 +177,7 @@ This built-in initiative is deployed as part of the
|Name<br /><sub>(Azure portal)</sub> |Description |Effect(s) |Version<br /><sub>(GitHub)</sub> |
|---|---|---|---|
-|[Auditing on SQL server should be enabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fa6fb4358-5bf4-4ad7-ba82-2cd2f41ce5e9) |Auditing on your SQL Server should be enabled to track database activities across all databases on the server, except Synapse, and save them in an audit log. |AuditIfNotExists, Disabled |[2.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/SQL/SqlServerAuditing_Audit.json) |
+|[Auditing on SQL server should be enabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fa6fb4358-5bf4-4ad7-ba82-2cd2f41ce5e9) |Auditing on your SQL Server should be enabled to track database activities across all databases on the server and save them in an audit log. |AuditIfNotExists, Disabled |[2.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/SQL/SqlServerAuditing_Audit.json) |
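The auditing check above maps to a server-level setting that can also be enabled directly. A minimal sketch with `azure-mgmt-sql`; the resource names and storage key are placeholders, and the 90-day retention anticipates the separate retention check later in this section.

```python
# Minimal sketch: enable blob auditing on an Azure SQL server, per the
# policy row above. All names and the key are placeholders.
from azure.identity import DefaultAzureCredential
from azure.mgmt.sql import SqlManagementClient
from azure.mgmt.sql.models import ServerBlobAuditingPolicy

client = SqlManagementClient(DefaultAzureCredential(), "<subscription-id>")

poller = client.server_blob_auditing_policies.begin_create_or_update(
    resource_group_name="<resource-group>",
    server_name="<sql-server>",
    parameters=ServerBlobAuditingPolicy(
        state="Enabled",
        storage_endpoint="https://<storage-account>.blob.core.windows.net/",
        storage_account_access_key="<storage-key>",
        retention_days=90,  # also satisfies the retention check below
    ),
)
print(poller.result().state)
```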
### Ensure ASC Default policy setting "Monitor SQL Encryption" is not "Disabled"
@@ -195,7 +195,7 @@ This built-in initiative is deployed as part of the
|Name<br /><sub>(Azure portal)</sub> |Description |Effect(s) |Version<br /><sub>(GitHub)</sub> |
|---|---|---|---|
-|[A security contact email address should be provided for your subscription](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F4f4f78b8-e367-4b10-a341-d9a4ad5cf1c7) |Enter an email address to receive notifications when Azure Security Center detects compromised resources |AuditIfNotExists, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_Security_contact_email.json) |
+|[Subscriptions should have a contact email address for security issues](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F4f4f78b8-e367-4b10-a341-d9a4ad5cf1c7) |To ensure the relevant people in your organization are notified when there is a potential security breach in one of your subscriptions, set a security contact to receive email notifications from Security Center. |AuditIfNotExists, Disabled |[1.0.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_Security_contact_email.json) |
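The contact, phone, and notification checks in this block all hang off the same subscription-level security contact resource. A sketch that sets it via the raw `securityContacts` REST API; the `2017-08-01-preview` api-version and payload shape reflect the preview contract and should be treated as assumptions to verify.

```python
# Minimal sketch: set the Security Center contact so the email, phone, and
# notification checks in this block can pass. Endpoint shape is assumed.
import requests
from azure.identity import DefaultAzureCredential

subscription_id = "<subscription-id>"
token = DefaultAzureCredential().get_token(
    "https://management.azure.com/.default"
).token

url = (
    f"https://management.azure.com/subscriptions/{subscription_id}"
    "/providers/Microsoft.Security/securityContacts/default1"
    "?api-version=2017-08-01-preview"
)
body = {
    "properties": {
        "email": "security@contoso.com",  # security contact address
        "phone": "+1-555-0100",           # see the phone-number check below
        "alertNotifications": "On",       # high-severity alert emails
        "alertsToAdmins": "On",           # also email subscription owners
    }
}
resp = requests.put(url, json=body, headers={"Authorization": f"Bearer {token}"})
resp.raise_for_status()
```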
### Ensure that security contact 'Phone number' is set
@@ -213,7 +213,7 @@ This built-in initiative is deployed as part of the
|Name<br /><sub>(Azure portal)</sub> |Description |Effect(s) |Version<br /><sub>(GitHub)</sub> |
|---|---|---|---|
-|[Email notification for high severity alerts should be enabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F6e2593d9-add6-4083-9c9b-4b7d2188c899) |Enable emailing security alerts to the security contact, in order to have them receive security alert emails from Microsoft. This ensures that the right people are aware of any potential security issues and are able to mitigate the risks |AuditIfNotExists, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_Email_notification.json) |
+|[Email notification for high severity alerts should be enabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F6e2593d9-add6-4083-9c9b-4b7d2188c899) |To ensure the relevant people in your organization are notified when there is a potential security breach in one of your subscriptions, enable email notifications for high severity alerts in Security Center. |AuditIfNotExists, Disabled |[1.0.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_Email_notification.json) |
### Ensure that 'Send email also to subscription owners' is set to 'On'
@@ -222,7 +222,7 @@ This built-in initiative is deployed as part of the
|Name<br /><sub>(Azure portal)</sub> |Description |Effect(s) |Version<br /><sub>(GitHub)</sub> |
|---|---|---|---|
-|[Email notification to subscription owner for high severity alerts should be enabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F0b15565f-aa9e-48ba-8619-45960f2c314d) |Enable emailing security alerts to the subscription owner, in order to have them receive security alert emails from Microsoft. This ensures that they are aware of any potential security issues and can mitigate the risk in a timely fashion |AuditIfNotExists, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_Email_notification_to_subscription_owner.json) |
+|[Email notification to subscription owner for high severity alerts should be enabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F0b15565f-aa9e-48ba-8619-45960f2c314d) |To ensure your subscription owners are notified when there is a potential security breach in their subscription, set email notifications to subscription owners for high severity alerts in Security Center. |AuditIfNotExists, Disabled |[1.0.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_Email_notification_to_subscription_owner.json) |
## Storage Accounts
@@ -262,7 +262,7 @@ This built-in initiative is deployed as part of the
|Name<br /><sub>(Azure portal)</sub> |Description |Effect(s) |Version<br /><sub>(GitHub)</sub> |
|---|---|---|---|
-|[Auditing on SQL server should be enabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fa6fb4358-5bf4-4ad7-ba82-2cd2f41ce5e9) |Auditing on your SQL Server should be enabled to track database activities across all databases on the server, except Synapse, and save them in an audit log. |AuditIfNotExists, Disabled |[2.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/SQL/SqlServerAuditing_Audit.json) |
+|[Auditing on SQL server should be enabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fa6fb4358-5bf4-4ad7-ba82-2cd2f41ce5e9) |Auditing on your SQL Server should be enabled to track database activities across all databases on the server and save them in an audit log. |AuditIfNotExists, Disabled |[2.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/SQL/SqlServerAuditing_Audit.json) |
### Ensure that 'AuditActionGroups' in 'auditing' policy for a SQL server is set properly
@@ -280,7 +280,7 @@ This built-in initiative is deployed as part of the
|Name<br /><sub>(Azure portal)</sub> |Description |Effect(s) |Version<br /><sub>(GitHub)</sub> |
|---|---|---|---|
-|[SQL servers should be configured with auditing retention days greater than 90 days.](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F89099bee-89e0-4b26-a5f4-165451757743) |Audit SQL servers configured with an auditing retention period of less than 90 days. |AuditIfNotExists, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/SQL/SqlServerAuditingRetentionDays_Audit.json) |
+|[SQL servers should be configured with 90 days auditing retention or higher.](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F89099bee-89e0-4b26-a5f4-165451757743) |SQL servers should be configured with 90 days auditing retention or higher. |AuditIfNotExists, Disabled |[2.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/SQL/SqlServerAuditingRetentionDays_Audit.json) |
### Ensure that 'Advanced Data Security' on a SQL server is set to 'On'
@@ -317,8 +317,8 @@ This built-in initiative is deployed as part of the
|Name<br /><sub>(Azure portal)</sub> |Description |Effect(s) |Version<br /><sub>(GitHub)</sub> |
|---|---|---|---|
-|[SQL Managed Instance TDE protector should be encrypted with your own key](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F048248b0-55cd-46da-b1ff-39efd52db260) |Transparent Data Encryption (TDE) with your own key support provides increased transparency and control over the TDE Protector, increased security with an HSM-backed external service, and promotion of separation of duties. |AuditIfNotExists, Disabled |[1.0.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/SQL/SqlManagedInstance_EnsureServerTDEisEncryptedWithYourOwnKey_Audit.json) |
-|[SQL server TDE protector should be encrypted with your own key](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F0d134df8-db83-46fb-ad72-fe0c9428c8dd) |Transparent Data Encryption (TDE) with your own key support provides increased transparency and control over the TDE Protector, increased security with an HSM-backed external service, and promotion of separation of duties. |AuditIfNotExists, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/SQL/SqlServer_EnsureServerTDEisEncryptedWithYourOwnKey_Audit.json) |
+|[SQL managed instances should use customer-managed keys to encrypt data at rest](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F048248b0-55cd-46da-b1ff-39efd52db260) |Implementing Transparent Data Encryption (TDE) with your own key provides you with increased transparency and control over the TDE Protector, increased security with an HSM-backed external service, and promotion of separation of duties. This recommendation applies to organizations with a related compliance requirement. |AuditIfNotExists, Disabled |[1.0.2](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/SQL/SqlManagedInstance_EnsureServerTDEisEncryptedWithYourOwnKey_Audit.json) |
+|[SQL servers should use customer-managed keys to encrypt data at rest](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F0d134df8-db83-46fb-ad72-fe0c9428c8dd) |Implementing Transparent Data Encryption (TDE) with your own key provides increased transparency and control over the TDE Protector, increased security with an HSM-backed external service, and promotion of separation of duties. This recommendation applies to organizations with a related compliance requirement. |AuditIfNotExists, Disabled |[2.0.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/SQL/SqlServer_EnsureServerTDEisEncryptedWithYourOwnKey_Audit.json) |
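The two renamed TDE rows above correspond to swapping a SQL server's encryption protector from a service-managed key to a Key Vault key. A sketch assuming `azure-mgmt-sql`; the server key (named `<vault>_<key>_<key-version>`) must already exist on the server, and all names are placeholders.

```python
# Minimal sketch: point a SQL server's TDE protector at a customer-managed
# Key Vault key, matching the rows above. Assumes the server key exists.
from azure.identity import DefaultAzureCredential
from azure.mgmt.sql import SqlManagementClient
from azure.mgmt.sql.models import EncryptionProtector

client = SqlManagementClient(DefaultAzureCredential(), "<subscription-id>")

poller = client.encryption_protectors.begin_create_or_update(
    resource_group_name="<resource-group>",
    server_name="<sql-server>",
    encryption_protector_name="current",  # the only valid protector name
    parameters=EncryptionProtector(
        server_key_type="AzureKeyVault",
        server_key_name="<vault>_<key>_<key-version>",  # placeholder
    ),
)
print(poller.result().server_key_type)
```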
### Ensure 'Enforce SSL connection' is set to 'ENABLED' for MySQL Database Server
@@ -585,7 +585,7 @@ This built-in initiative is deployed as part of the
|Name<br /><sub>(Azure portal)</sub> |Description |Effect(s) |Version<br /><sub>(GitHub)</sub> |
|---|---|---|---|
-|[System updates should be installed on your machines](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F86b3d65f-7626-441e-b690-81a8b71cff60) |Missing security system updates on your servers will be monitored by Azure Security Center as recommendations |AuditIfNotExists, Disabled |[2.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_MissingSystemUpdates_Audit.json) |
+|[System updates should be installed on your machines](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F86b3d65f-7626-441e-b690-81a8b71cff60) |Missing security system updates on your servers will be monitored by Azure Security Center as recommendations |AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_MissingSystemUpdates_Audit.json) |
### Ensure that the endpoint protection for all Virtual Machines is installed
@@ -594,7 +594,7 @@ This built-in initiative is deployed as part of the
|Name<br /><sub>(Azure portal)</sub> |Description |Effect(s) |Version<br /><sub>(GitHub)</sub> |
|---|---|---|---|
-|[Monitor missing Endpoint Protection in Azure Security Center](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Faf6cd1bd-1635-48cb-bde7-5b15693900b9) |Servers without an installed Endpoint Protection agent will be monitored by Azure Security Center as recommendations |AuditIfNotExists, Disabled |[2.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_MissingEndpointProtection_Audit.json) |
+|[Monitor missing Endpoint Protection in Azure Security Center](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Faf6cd1bd-1635-48cb-bde7-5b15693900b9) |Servers without an installed Endpoint Protection agent will be monitored by Azure Security Center as recommendations |AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_MissingEndpointProtection_Audit.json) |
## Other Security Considerations
@@ -605,7 +605,7 @@ This built-in initiative is deployed as part of the
|Name<br /><sub>(Azure portal)</sub> |Description |Effect(s) |Version<br /><sub>(GitHub)</sub> |
|---|---|---|---|
-|[Key vault should have purge protection enabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F0b60c0b2-2dc2-4e1c-b5c9-abbed971de53) |Malicious deletion of a key vault can lead to permanent data loss. A malicious insider in your organization may potentially be able to gain access to delete and purge key vaults. Purge protection protects you fr