Service | Microsoft Docs article | Related commit history on GitHub | Change details |
---|---|---|---|
active-directory | Customize Application Attributes | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/app-provisioning/customize-application-attributes.md | Title: Tutorial - Customize Azure Active Directory attribute mappings in Application Provisioning -description: Learn what attribute mappings for Software as a Service (SaaS) apps in Azure Active Directory Application Provisioning are how you can modify them to address your business needs. +description: Learn about attribute mappings for Software as a Service (SaaS) apps in Azure Active Directory Application Provisioning. Learn what attributes are and how you can modify them to address your business needs. Before you get started, make sure you're familiar with app management and **sing - [Quickstart Series on App Management in Azure AD](../manage-apps/view-applications-portal.md) - [What is single sign-on (SSO)?](../manage-apps/what-is-single-sign-on.md) -There's a pre-configured set of attributes and attribute-mappings between Azure AD user objects and each SaaS app's user objects. Some apps manage other types of objects along with Users, such as Groups. +There's a preconfigured set of attributes and attribute-mappings between Azure AD user objects and each SaaS app's user objects. Some apps manage other types of objects along with Users, such as Groups. You can customize the default attribute-mappings according to your business needs. So, you can change or delete existing attribute-mappings, or create new attribute-mappings. Follow these steps to access the **Mappings** feature of user provisioning:  -1. Select a **Mappings** configuration to open the related **Attribute Mapping** screen. Some attribute-mappings are required by a SaaS application to function correctly. For required attributes, the **Delete** feature is unavailable. +1. Select a **Mappings** configuration to open the related **Attribute Mapping** screen. SaaS applications require certain attribute-mappings to function correctly. For required attributes, the **Delete** feature is unavailable.  Along with this property, attribute-mappings also support the following attribut - **Source attribute** - The user attribute from the source system (example: Azure Active Directory). - **Target attribute** – The user attribute in the target system (example: ServiceNow).-- **Default value if null (optional)** - The value that will be passed to the target system if the source attribute is null. This value will only be provisioned when a user is created. The "default value when null" won't be provisioned when updating an existing user. If for example, you want to provision all existing users in the target system with a particular Job Title (when it's null in the source system), you can use the following [expression](../app-provisioning/functions-for-customizing-application-data.md): Switch(IsPresent([jobTitle]), "DefaultValue", "True", [jobTitle]). Make sure to replace the "Default Value" with what you would like to provision when null in the source system. +- **Default value if null (optional)** - The value that is passed to the target system if the source attribute is null. This value is only provisioned when a user is created. The "default value when null" won't be provisioned when updating an existing user. 
If for example, you provision all existing users in the target system with a particular Job Title (when it's null in the source system), you'll use the following [expression](../app-provisioning/functions-for-customizing-application-data.md): Switch(IsPresent([jobTitle]), "DefaultValue", "True", [jobTitle]). Make sure to replace the "Default Value" with the value to provision when null in the source system. - **Match objects using this attribute** – Whether this mapping should be used to uniquely identify users between the source and target systems. It's typically set on the userPrincipalName or mail attribute in Azure AD, which is typically mapped to a username field in a target application. - **Matching precedence** – Multiple matching attributes can be set. When there are multiple, they're evaluated in the order defined by this field. As soon as a match is found, no further matching attributes are evaluated. While you can set as many matching attributes as you would like, consider whether the attributes you're using as matching attributes are truly unique and need to be matching attributes. Generally customers have 1 or 2 matching attributes in their configuration. - **Apply this mapping** The attributes provisioned as part of Group objects can be customized in the sam ## Editing the list of supported attributes -The user attributes supported for a given application are pre-configured. Most application's user management APIs don't support schema discovery. So, the Azure AD provisioning service isn't able to dynamically generate the list of supported attributes by making calls to the application. +The user attributes supported for a given application are preconfigured. Most application's user management APIs don't support schema discovery. So, the Azure AD provisioning service isn't able to dynamically generate the list of supported attributes by making calls to the application. However, some applications support custom attributes, and the Azure AD provisioning service can read and write to custom attributes. To enter their definitions into the Azure portal, select the **Show advanced options** check box at the bottom of the **Attribute Mapping** screen, and then select **Edit attribute list for** your app. When you're editing the list of supported attributes, the following properties a - **Multi-value?** - Whether the attribute supports multiple values. - **Exact case?** - Whether the attributes values are evaluated in a case-sensitive way. - **API Expression** - Don't use, unless instructed to do so by the documentation for a specific provisioning connector (such as Workday).-- **Referenced Object Attribute** - If it's a Reference type attribute, then this menu lets you select the table and attribute in the target application that contains the value associated with the attribute. 
For example, if you have an attribute named "Department" whose stored value references an object in a separate "Departments" table, you would select "Departments.Name". The reference tables and the primary ID fields supported for a given application are preconfigured and currently can't be edited using the Azure portal, but can be edited using the [Microsoft Graph API](/graph/api/resources/synchronization-configure-with-custom-target-attributes). #### Provisioning a custom extension attribute to a SCIM compliant application The SCIM RFC defines a core user and group schema, while also allowing for extensions to the schema to meet your application's needs. To add a custom attribute to a SCIM application: |
active-directory | On Premises Scim Provisioning | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/app-provisioning/on-premises-scim-provisioning.md | The Azure Active Directory (Azure AD) provisioning service supports a [SCIM 2.0] - A computer with at least 3 GB of RAM, to host a provisioning agent. The computer should have Windows Server 2016 or a later version of Windows Server, with connectivity to the target application, and with outbound connectivity to login.microsoftonline.com, other Microsoft Online Services and Azure domains. An example is a Windows Server 2016 virtual machine hosted in Azure IaaS or behind a proxy. ## Deploying Azure AD provisioning agent-The Azure AD Provisioning agent can be deployed on the same server hosting a SCIM enabled application, or a seperate server, providing it has line of sight to the application's SCIM endpoint. A single agent also supports provision to multiple applications hosted locally on the same server or seperate hosts, again as long as each SCIM endpoint is reachable by the agent. +The Azure AD Provisioning agent can be deployed on the same server hosting a SCIM enabled application, or a separate server, providing it has line of sight to the application's SCIM endpoint. A single agent also supports provision to multiple applications hosted locally on the same server or separate hosts, again as long as each SCIM endpoint is reachable by the agent. 1. [Download](https://aka.ms/OnPremProvisioningAgent) the provisioning agent and copy it onto the virtual machine or server that your SCIM application endpoint is hosted on. 2. Run the provisioning agent installer, agree to the terms of service, and select **Install**. |
active-directory | Provision On Demand | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/app-provisioning/provision-on-demand.md | There are currently a few known limitations to on-demand provisioning. Post your * Restoring a previously soft-deleted user in the target tenant with on-demand provisioning isn't supported. If you try to soft delete a user with on-demand provisioning and then restore the user, it can result in duplicate users. * On-demand provisioning of roles isn't supported. * On-demand provisioning supports disabling users that have been unassigned from the application. However, it doesn't support disabling or deleting users that have been disabled or deleted from Azure AD. Those users won't appear when you search for a user.+* On-demand provisioning does not support nested groups that are not directly assigned to the application. ## Next steps |
active-directory | Concept Authentication Methods Manage | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/authentication/concept-authentication-methods-manage.md | |
active-directory | How To Authentication Methods Manage | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/authentication/how-to-authentication-methods-manage.md | Title: How to migrate to the Authentication methods policy (preview) + Title: How to migrate to the Authentication methods policy description: Learn about how to centrally manage multifactor authentication (MFA) and self-service password reset (SSPR) settings in the Authentication methods policy. Previously updated : 01/07/2023 Last updated : 03/22/2023 -# How to migrate MFA and SSPR policy settings to the Authentication methods policy for Azure AD (preview) +# How to migrate MFA and SSPR policy settings to the Authentication methods policy for Azure AD You can migrate Azure Active Directory (Azure AD) [legacy policy settings](concept-authentication-methods-manage.md#legacy-mfa-and-sspr-policies) that separately control multifactor authentication (MFA) and self-service password reset (SSPR) to unified management with the [Authentication methods policy](./concept-authentication-methods-manage.md). |
active-directory | Howto Password Ban Bad On Premises Deploy | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/authentication/howto-password-ban-bad-on-premises-deploy.md | The following core requirements apply: > [!NOTE] > Some endpoints, such as the CRL endpoint, are not addressed in this article. For a list of all supported endpoints, see [Microsoft 365 URLs and IP address ranges](/microsoft-365/enterprise/urls-and-ip-address-ranges#microsoft-365-common-and-office-online).+>In addition, other endpoints are required for Azure portal authentication. For more information, see [Azure portal URLs for proxy bypass](/azure/azure-portal/azure-portal-safelist-urls?tabs=public-cloud#azure-portal-urls-for-proxy-bypass). ### Azure AD Password Protection DC agent To install the Azure AD Password Protection proxy service, complete the followin Registration of the Azure AD Password Protection proxy service is necessary only once in the lifetime of the service. After that, the Azure AD Password Protection proxy service will automatically perform any other necessary maintenance. +1. To make sure that the changes have taken effect, run `Test-AzureADPasswordProtectionDCAgentHealth -TestAll`. For help resolving errors, see [Troubleshoot: On-premises Azure AD Password Protection](howto-password-ban-bad-on-premises-troubleshoot.md). + 1. Now register the on-premises Active Directory forest with the necessary credentials to communicate with Azure by using the `Register-AzureADPasswordProtectionForest` PowerShell cmdlet. > [!NOTE] To install the Azure AD Password Protection proxy service, complete the followin For `Register-AzureADPasswordProtectionForest` to succeed, at least one DC running Windows Server 2012 or later must be available in the Azure AD Password Protection proxy server's domain. The Azure AD Password Protection DC agent software doesn't have to be installed on any domain controllers prior to this step. +1. To make sure that the changes have taken effect, run `Test-AzureADPasswordProtectionDCAgentHealth -TestAll`. For help resolving errors, see [Troubleshoot: On-premises Azure AD Password Protection](howto-password-ban-bad-on-premises-troubleshoot.md). + ### Configure the proxy service to communicate through an HTTP proxy If your environment requires the use of a specific HTTP proxy to communicate with Azure, use the following steps to configure the Azure AD Password Protection service. |
active-directory | Concept Conditional Access Conditions | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/conditional-access/concept-conditional-access-conditions.md | By selecting **Other clients**, you can specify a condition that affects apps th ## Device state (deprecated) -**This preview feature has been deprecated.** Customers should use the **Filter for devices** condition in the Conditional Access policy, to satisfy scenarios previously achieved using device state (preview) condition. +**This preview feature has been deprecated.** Customers should use the **Filter for devices** condition in the Conditional Access policy, to satisfy scenarios previously achieved using device state (deprecated) condition. The device state condition was used to exclude devices that are hybrid Azure AD joined and/or devices marked as compliant with a Microsoft Intune compliance policy from an organization's Conditional Access policies. The above scenario, can be configured using *All users* accessing the *Microsoft ## Filter for devices -There’s a new optional condition in Conditional Access called filter for devices. When configuring filter for devices as a condition, organizations can choose to include or exclude devices based on a filter using a rule expression on device properties. The rule expression for filter for devices can be authored using rule builder or rule syntax. This experience is similar to the one used for dynamic membership rules for groups. For more information, see the article [Conditional Access: Filter for devices (preview)](concept-condition-filters-for-devices.md). +There’s a new optional condition in Conditional Access called filter for devices. When configuring filter for devices as a condition, organizations can choose to include or exclude devices based on a filter using a rule expression on device properties. The rule expression for filter for devices can be authored using rule builder or rule syntax. This experience is similar to the one used for dynamic membership rules for groups. For more information, see the article [Conditional Access: Filter for devices](concept-condition-filters-for-devices.md). ## Next steps |
active-directory | Concept Token Protection | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/conditional-access/concept-token-protection.md | Token protection (sometimes referred to as token binding in the industry) attemp Token protection creates a cryptographically secure tie between the token and the device (client secret) it's issued to. Without the client secret, the bound token is useless. When a user registers a Windows 10 or newer device in Azure AD, their primary identity is [bound to the device](../devices/concept-primary-refresh-token.md#how-is-the-prt-protected). This connection means that any issued sign-in token is tied to the device significantly reducing the chance of theft and replay attacks. +> [!IMPORTANT] +> Token protection is currently in public preview. For more information about previews, see [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/). + With this preview, we're giving you the ability to create a Conditional Access policy to require token protection for sign-in tokens for specific services. We support token protection for sign-in tokens in Conditional Access for desktop applications accessing Exchange Online and SharePoint Online on Windows devices. :::image type="content" source="media/concept-token-protection/complete-policy-components-session.png" alt-text="Screenshot showing a Conditional Access policy requiring token protection as the session control"::: |
active-directory | Troubleshoot Required Resource Access Limits | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/troubleshoot-required-resource-access-limits.md | -The `RequiredResourceAccess` collection (RRA) on an application object contains all the configured API permissions that an app requires for its default consent request. This collection has various limits depending on which types of identities the app supports, For more information on the limits for supported account types, see [Validation differences by supported account types](supported-accounts-validation.md). +The `RequiredResourceAccess` collection (RRA) on an application object contains all the configured API permissions that an app requires for its default consent request. This collection has various limits depending on which types of identities the app supports. For more information on the limits for supported account types, see [Validation differences by supported account types](supported-accounts-validation.md). The limits on maximum permissions were updated in May 2022, so some apps may have more permissions in their RRA than are now allowed. In addition, apps that change their supported account types after configuring permissions may exceed the limits of the new setting. When apps exceed the configured permissions limit, no new permissions may be added until the number of permissions in the `RequiredResourceAccess` collection is brought back under the limits. If you still need the application or are unsure, the following steps will help y 1. **Remove duplicate permissions.** In some cases, the same permission is listed multiple times. Review the required permissions and remove permissions that are listed two or more times. See the related PowerShell script on the [additional resources](#additional-resources) section of this article. 2. **Remove unused permissions.** Review the permissions required by the application and compare them to what the application or service does. Remove permissions that are configured in the app registration, but which the application or service doesn’t require. For more information on how to review permissions, see [Review application permissions](../manage-apps/manage-application-permissions.md) 3. **Remove redundant permissions.** In many APIs, including Microsoft Graph, some permissions aren't necessary when other more privileged permissions are included. For example, the Microsoft Graph permission User.Read.All (read all users) isn't needed when an application also has User.ReadWrite.All (read, create and update all users). To learn more about Microsoft Graph permissions, see [Microsoft Graph permissions reference](/graph/permissions-reference). -4. **Use multiple app registrations.** If a single app or service requires more than 400 permissions in the required permissions list, the app will need to be configured to use two (or more) different app registrations, each one with 400 or fewer permissions configured on the app registration. 
## Frequently asked questions (FAQ) process { - Learn about API permissions and the Microsoft identity platform: [Overview of permissions and consent in the Microsoft identity platform](permissions-consent-overview.md) - Understand the permissions available for Microsoft Graph: [Microsoft Graph permissions reference](/graph/permissions-reference)-- Review the limitations to application configurations: [Validation differences by supported account types](supported-accounts-validation.md)+- Review the limitations to application configurations: [Validation differences by supported account types](supported-accounts-validation.md) |
active-directory | Concept Fundamentals Security Defaults | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/fundamentals/concept-fundamentals-security-defaults.md | Microsoft is making security defaults available to everyone, because managing se Security defaults make it easier to help protect your organization from these identity-related attacks with preconfigured security settings: -- [Requiring all users to register for Azure AD Multi-Factor Authentication](#require-all-users-to-register-for-azure-ad-multi-factor-authentication).+- [Requiring all users to register for Azure AD Multifactor Authentication](#require-all-users-to-register-for-azure-ad-multifactor-authentication). - [Requiring administrators to do multifactor authentication](#require-administrators-to-do-multifactor-authentication). - [Requiring users to do multifactor authentication when necessary](#require-users-to-do-multifactor-authentication-when-necessary). - [Blocking legacy authentication protocols](#block-legacy-authentication-protocols). To enable security defaults in your directory: 1. Sign in to the [Azure portal](https://portal.azure.com) as a security administrator, Conditional Access administrator, or global administrator. 1. Browse to **Azure Active Directory** > **Properties**. 1. Select **Manage security defaults**.-1. Set the **Enable security defaults** toggle to **Yes**. +1. Set **Security defaults** to **Enabled **. 1. Select **Save**.  ## Enforced security policies -### Require all users to register for Azure AD Multi-Factor Authentication +### Require all users to register for Azure AD Multifactor Authentication -All users in your tenant must register for multifactor authentication (MFA) in the form of the Azure AD Multi-Factor Authentication. Users have 14 days to register for Azure AD Multi-Factor Authentication by using the [Microsoft Authenticator app](../authentication/concept-authentication-authenticator-app.md) or any app supporting [OATH TOTP](../authentication/concept-authentication-oath-tokens.md). After the 14 days have passed, the user can't sign in until registration is completed. A user's 14-day period begins after their first successful interactive sign-in after enabling security defaults. +All users in your tenant must register for multifactor authentication (MFA) in the form of the Azure AD Multifactor Authentication. Users have 14 days to register for Azure AD Multifactor Authentication by using the [Microsoft Authenticator app](../authentication/concept-authentication-authenticator-app.md) or any app supporting [OATH TOTP](../authentication/concept-authentication-oath-tokens.md). After the 14 days have passed, the user can't sign in until registration is completed. A user's 14-day period begins after their first successful interactive sign-in after enabling security defaults. ### Require administrators to do multifactor authentication Administrators have increased access to your environment. Because of the power t > [!TIP] > We recommend having separate accounts for administration and standard productivity tasks to significantly reduce the number of times your admins are prompted for MFA. 
-After registration with Azure AD Multi-Factor Authentication is finished, the following Azure AD administrator roles will be required to do extra authentication every time they sign in: +After registration with Azure AD Multifactor Authentication is finished, the following Azure AD administrator roles will be required to do extra authentication every time they sign in: - Global administrator - Application administrator This policy applies to all users who are accessing Azure Resource Manager servic ### Authentication methods -Security defaults users are required to register for and use Azure AD Multi-Factor Authentication using the [Microsoft Authenticator app using notifications](../authentication/concept-authentication-authenticator-app.md). Users may use verification codes from the Microsoft Authenticator app but can only register using the notification option. Users can also use any third party application using [OATH TOTP](../authentication/concept-authentication-oath-tokens.md) to generate codes. +Security defaults users are required to register for and use Azure AD Multifactor Authentication using the [Microsoft Authenticator app using notifications](../authentication/concept-authentication-authenticator-app.md). Users may use verification codes from the Microsoft Authenticator app but can only register using the notification option. Users can also use any third party application using [OATH TOTP](../authentication/concept-authentication-oath-tokens.md) to generate codes. > [!WARNING] > Do not disable methods for your organization if you are using security defaults. Disabling methods may lead to locking yourself out of your tenant. Leave all **Methods available to users** enabled in the [MFA service settings portal](../authentication/howto-mfa-getstarted.md#choose-authentication-methods-for-mfa). Any [B2B guest](../external-identities/what-is-b2b.md) users or [B2B direct conn ### Disabled MFA status -If your organization is a previous user of per-user based Azure AD Multi-Factor Authentication, don't be alarmed to not see users in an **Enabled** or **Enforced** status if you look at the Multi-Factor Auth status page. **Disabled** is the appropriate status for users who are using security defaults or Conditional Access based Azure AD Multi-Factor Authentication. +If your organization is a previous user of per-user based Azure AD Multifactor Authentication, don't be alarmed to not see users in an **Enabled** or **Enforced** status if you look at the Multi-Factor Auth status page. **Disabled** is the appropriate status for users who are using security defaults or Conditional Access based Azure AD Multifactor Authentication. ### Conditional Access To disable security defaults in your directory: 1. Sign in to the [Azure portal](https://portal.azure.com) as a security administrator, Conditional Access administrator, or global administrator. 1. Browse to **Azure Active Directory** > **Properties**. 1. Select **Manage security defaults**.-1. Set the **Enable security defaults** toggle to **No**. +1. Set **Security defaults** to **Disabled (not recommended)**. 1. Select **Save**. ## Next steps |
active-directory | Understanding Lifecycle Workflows | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/governance/understanding-lifecycle-workflows.md | Once scheduling is enabled, the workflow will be evaluated every three hours to [](media/understanding-lifecycle-workflows/workflow-10.png#lightbox) +>[!NOTE] +> For a particular user and workflow version, the scheduled workflow execution is performed only once every 30 days. Also, the execution of on-demand workflows of a particular workflow version in the last 30 days results in the scheduled workflow execution not taking place for a particular user. + To view a detailed guide on scheduling a workflow, see: [Customize the schedule of workflows](customize-workflow-schedule.md). ### On-demand scheduling |
active-directory | Add Application Portal Assign Users | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/manage-apps/add-application-portal-assign-users.md | -It is recommended that you use a non-production environment to test the steps in this quickstart. +It's recommended that you use a nonproduction environment to test the steps in this quickstart. ## Prerequisites To create a user account and assign it to an enterprise application, you need: - An Azure AD user account. If you don't already have one, you can [Create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F).-- One of the following roles: Global Administrator, Cloud Application Administrator, Application Administrator, or owner of the service principal.+- One of the following roles: Global Administrator, or owner of the service principal. - Completion of the steps in [Quickstart: Add an enterprise application](add-application-portal.md). ## Create a user account To create a user account and assign it to an enterprise application, you need: To create a user account in your Azure AD tenant: 1. Go to the [Azure portal](https://portal.azure.com) and sign in using one of the roles listed in the prerequisites.-1. Browse to **Azure Active Directory** > **Users**. +1. Browse to **Azure Active Directory** and select **Users**. 1. Select **New user** at the top of the pane. :::image type="content" source="media/add-application-portal-assign-users/new-user.png" alt-text="Add a new user account to your Azure AD tenant."::: 1. In the **User name** field, enter the username of the user account. For example, `contosouser1@contoso.com`. Be sure to change `contoso.com` to the name of your tenant domain. 1. In the **Name** field, enter the name of the user of the account. For example, `contosouser1`.-1. Leave **Auto-generate password** selected, and then select **Show password**. Write down the value that's displayed in the Password box. +1. Enter the details required for the user under the **Groups and roles**, **Settings**, and **Job info** sections. 1. Select **Create**. ## Assign a user account to an enterprise application To assign a user account to an enterprise application: -1. In the [Azure portal](https://portal.azure.com), browse to **Azure Active Directory** > **Enterprise applications**, and then search for and select the application to which you want to assign the user account. For example, the application that you created in the previous quickstart named **Azure AD SAML Toolkit 1**. +1. In the [Azure portal](https://portal.azure.com), browse to **Azure Active Directory** and select **Enterprise applications**. +1. Search for and select the application to which you want to assign the user account. For example, the application that you created in the previous quickstart named **Azure AD SAML Toolkit 1**. 1. In the left pane, select **Users and groups**, and then select **Add user/group**. - :::image type="content" source="media/add-application-portal-assign-users/assign-user.png" alt-text="Assign user account to zn application in your Azure AD tenant."::: + :::image type="content" source="media/add-application-portal-assign-users/assign-user.png" alt-text="Assign user account to an application in your Azure AD tenant."::: 1. On the **Add Assignment** pane, select **None Selected** under **Users and groups**. 1. Search for and select the user that you want to assign to the application. For example, `contosouser1@contoso.com`. 
To assign a user account to an enterprise application: ## Clean up resources -If you are planning to complete the next quickstart, keep the application that you created. Otherwise, you can consider deleting it to clean up your tenant. +If you're planning to complete the next quickstart, keep the application that you created. Otherwise, you can consider deleting it to clean up your tenant. ## Next steps |
active-directory | Add Application Portal | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/manage-apps/add-application-portal.md | -In this quickstart, you use the Azure portal to add an enterprise application to your Azure Active Directory (Azure AD) tenant. Azure AD has a gallery that contains thousands of enterprise applications that have been pre-integrated. Many of the applications your organization uses are probably already in the gallery. This quickstart uses the application named **Azure AD SAML Toolkit** as an example, but the concepts apply for most [enterprise applications in the gallery](../saas-apps/tutorial-list.md). +In this quickstart, you use the Azure portal to add an enterprise application to your Azure Active Directory (Azure AD) tenant. Azure AD has a gallery that contains thousands of enterprise applications that have been preintegrated. Many of the applications your organization uses are probably already in the gallery. This quickstart uses the application named **Azure AD SAML Toolkit** as an example, but the concepts apply for most [enterprise applications in the gallery](../saas-apps/tutorial-list.md). -It is recommended that you use a non-production environment to test the steps in this quickstart. +It's recommended that you use a nonproduction environment to test the steps in this quickstart. ## Prerequisites To add an enterprise application to your Azure AD tenant, you need: - An Azure AD user account. If you don't already have one, you can [Create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F).-- One of the following roles: Global Administrator, Cloud Application Administrator, or Application Administrator.+- One of the following roles: Global Administrator, or Application Administrator. ## Add an enterprise application To add an enterprise application to your tenant: 1. Go to the [Azure portal](https://portal.azure.com) and sign in using one of the roles listed in the prerequisites.-1. Browse to **Azure Active Directory** > **Enterprise applications**. The **All applications** pane opens and displays a list of the applications in your Azure AD tenant. +1. Browse to **Azure Active Directory** and select **Enterprise applications**. The **All applications** pane opens and displays a list of the applications in your Azure AD tenant. 1. In the **Enterprise applications** pane, select **New application**. 1. The **Browse Azure AD Gallery** pane opens and displays tiles for cloud platforms, on-premises applications, and featured applications. Applications listed in the **Featured applications** section have icons indicating whether they support federated single sign-on (SSO) and provisioning. Search for and select the application. In this quickstart, **Azure AD SAML Toolkit** is being used. If you choose to install an application that uses OpenID Connect based SSO, inst ## Clean up resources -If you are planning to complete the next quickstart, keep the enterprise application that you created. Otherwise, you can consider deleting it to clean up your tenant. For more information, see [Delete an application](delete-application-portal.md). +If you're planning to complete the next quickstart, keep the enterprise application that you created. Otherwise, you can consider deleting it to clean up your tenant. For more information, see [Delete an application](delete-application-portal.md). ## Next steps |
active-directory | Manage Application Permissions | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/manage-apps/manage-application-permissions.md | Each option generates PowerShell scripts that enable you to control user access Use the following Azure AD PowerShell script to revoke all permissions granted to an application. ```powershell-Connect-AzureAD -Scopes "Application.ReadWrite.All", "Directory.ReadWrite.All", "DelegatedPermissionGrant.ReadWrite.All" "AppRoleAssignment.ReadWrite.All", +Connect-AzureAD # Get Service Principal using objectId $sp = Get-AzureADServicePrincipal -ObjectId "<ServicePrincipal objectID>" $spApplicationPermissions | ForEach-Object { Remove appRoleAssignments for users or groups to the application using the following scripts. ```powershell-Connect-AzureAD -Scopes "Application.ReadWrite.All", "Directory.ReadWrite.All", "AppRoleAssignment.ReadWrite.All" +Connect-AzureAD # Get Service Principal using objectId $sp = Get-AzureADServicePrincipal -ObjectId "<ServicePrincipal objectID>" |
active-directory | Migrate Adfs Apps To Azure | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/manage-apps/migrate-adfs-apps-to-azure.md | Many organizations have Software as a Service (SaaS) or custom line-of-business ## Types of apps to migrate -Migrating all your application authentication to Azure AD is optimal, as it gives you a single control plane for identity and access management. +Migrating all your application authentication to Azure AD is recommended, as it gives you a single control plane for identity and access management. -Your applications may use modern or legacy protocols for authentication. When you plan your migration to Azure AD, consider migrating the apps that use modern authentication protocols (such as SAML and Open ID Connect) first. These apps can be reconfigured to authenticate with Azure AD either via a built-in connector from the Azure App Gallery, or by registering the application in Azure AD. Apps that use older protocols can be integrated using Application Proxy. +Your applications may use modern or legacy protocols for authentication. When you plan your migration to Azure AD, consider migrating the apps that use modern authentication protocols (such as SAML and Open ID Connect) first. These apps can be reconfigured to authenticate with Azure AD either via a built-in connector from the Azure App Gallery, or by registering the custom application in Azure AD. Apps that use older protocols can be integrated using [Application Proxy](../app-proxy/what-is-application-proxy.md) or any of our [Secure Hybrid Access (SHA) partners](secure-hybrid-access-integrations.md). For more information, see: For more information, see: ### The migration process -During the process of moving your app authentication to Azure AD, test your apps and configuration. We recommend that you continue to use existing test environments for migration testing when you move to the production environment. If a test environment isn't currently available, you can set one up using [Azure App Service](https://azure.microsoft.com/services/app-service/) or [Azure Virtual Machines](https://azure.microsoft.com/free/virtual-machines/search/?OCID=AID2000128_SEM_lHAVAxZC&MarinID=lHAVAxZC_79233574796345_azure%20virtual%20machines_be_c__1267736956991399_kwd-79233582895903%3Aloc-190&lnkd=Bing_Azure_Brand&msclkid=df6ac75ba7b612854c4299397f6ab5b0&ef_id=XmAptQAAAJXRb3S4%3A20200306231230%3As&dclid=CjkKEQiAhojzBRDg5ZfomsvdiaABEiQABCU7XjfdCUtsl-Abe1RAtAT35kOyI5YKzpxRD6eJS2NM97zw_wcB), depending on the architecture of the application. +During the process of moving your app authentication to Azure AD, test your apps and configuration. We recommend that you continue to use existing test environments for migration testing before you move to the production environment. If a test environment isn't currently available, you can set one up using [Azure App Service](https://azure.microsoft.com/services/app-service/) or [Azure Virtual Machines](https://azure.microsoft.com/free/virtual-machines/search/?OCID=AID2000128_SEM_lHAVAxZC&MarinID=lHAVAxZC_79233574796345_azure%20virtual%20machines_be_c__1267736956991399_kwd-79233582895903%3Aloc-190&lnkd=Bing_Azure_Brand&msclkid=df6ac75ba7b612854c4299397f6ab5b0&ef_id=XmAptQAAAJXRb3S4%3A20200306231230%3As&dclid=CjkKEQiAhojzBRDg5ZfomsvdiaABEiQABCU7XjfdCUtsl-Abe1RAtAT35kOyI5YKzpxRD6eJS2NM97zw_wcB), depending on the architecture of the application. You may choose to set up a separate test Azure AD tenant on which to develop your app configurations. 
Update the configuration of your production app to point to your production Azur ### Line of business apps -Your line-of-business apps are those that your organization developed or those that are a standard packaged product. Examples include apps built on Windows Identity Foundation and SharePoint apps (not SharePoint Online). +Your line-of-business apps are those that your organization developed or those that are a standard packaged product. -Line-of-business apps that use OAuth 2.0, OpenID Connect, or WS-Federation can be integrated with Azure AD as [app registrations](../develop/quickstart-register-app.md). Integrate custom apps that use SAML 2.0 or WS-Federation as [non-gallery applications](add-application-portal.md) on the enterprise applications page in the [Azure portal](https://portal.azure.com/). +Line-of-business apps that use OAuth 2.0, OpenID Connect, or WS-Federation can be integrated with Azure AD as [app registrations](../develop/quickstart-register-app.md). Integrate custom apps that use SAML 2.0 or WS-Federation as [non-gallery applications](add-application-portal.md) on the enterprise applications page in the [Entra portal](https://entra.microsoft.com/#home). ## SAML-based single sign-on Many SaaS applications have an [application-specific tutorial](../saas-apps/tuto  -Some apps can be migrated easily. Apps with more complex requirements, such as custom claims, may require additional configuration in Azure AD and/or Azure AD Connect. For information about supported claims mappings, see [How to: Customize claims emitted in tokens for a specific app in a tenant (Preview)](../develop/active-directory-claims-mapping.md). +Some apps can be migrated easily. Apps with more complex requirements, such as custom claims, may require additional configuration in Azure AD and/or [Azure AD Connect Health](../hybrid/whatis-azure-ad-connect.md). For information about supported claims mappings, see [How to: Customize claims emitted in tokens for a specific app in a tenant (Preview)](../develop/active-directory-claims-mapping.md). Keep in mind the following limitations when mapping attributes: For information about Azure AD SAML token encryption and how to configure it, se > [!NOTE] > Token encryption is an Azure Active Directory (Azure AD) premium feature. To learn more about Azure AD editions, features, and pricing, see [Azure AD pricing](https://www.microsoft.com/security/business/identity-access-management/azure-ad-pricing). +### SAML request signature verification (preview) + +This functionality validates the signature of signed authentication requests. An App Admin enables and disables the enforcement of signed requests and uploads the public keys that should be used to do the validation. For more information, see [How to enforce signed SAML authentication requests](howto-enforce-signed-saml-authentication.md). ++### Custom claims providers (preview) ++To migrate data from legacy systems such as ADFS or data stores such as LDAP your apps will be dependent on certain data in the tokens because of which full migration is difficult. You can use custom claims providers to add claims into the token. For more information, see [Custom claims provider overview](../develop/custom-claims-provider-overview.md). + ### Apps and configurations that can be moved today Apps that you can move easily today include SAML 2.0 apps that use the standard set of configuration elements and claims. 
These standard items are: Apps that you can move easily today include SAML 2.0 apps that use the standard The following require additional configuration steps to migrate to Azure AD: * Custom authorization or multi-factor authentication (MFA) rules in AD FS. You configure them using the [Azure AD Conditional Access](../conditional-access/overview.md) feature.-* Apps with multiple Reply URL endpoints. You configure them in Azure AD using PowerShell or the Azure portal interface. +* Apps with multiple Reply URL endpoints. You configure them in Azure AD using PowerShell or the Entra portal interface. * WS-Federation apps such as SharePoint apps that require SAML version 1.1 tokens. You can configure them manually using PowerShell. You can also add a pre-integrated generic template for SharePoint and SAML 1.1 applications from the gallery. We support the SAML 2.0 protocol. * Complex claims issuance transforms rules. For information about supported claims mappings, see: * [Claims mapping in Azure Active Directory](../develop/active-directory-claims-mapping.md). Apps that require the following protocol capabilities can't be migrated today: * Support for the WS-Trust ActAs pattern * SAML artifact resolution-* Signature verification of signed SAML requests - - > [!Note] - > Signed requests are accepted, but the signature isn't verified. -- Given that Azure AD only returns the token to endpoints preconfigured in the application, signature verification probably isn't required in most cases. --#### Claims in token capabilities --Apps that require the following claims in token capabilities can't be migrated today. --* Claims from attribute stores other than the Azure AD directory, unless that data is synced to Azure AD. For more information, see the [Azure AD synchronization API overview](/graph/api/resources/synchronization-overview). -* Issuance of directory multiple-value attributes. For example, we can't issue a multivalued claim for proxy addresses at this time. ## Map app settings from AD FS to Azure AD Migration requires assessing how the application is configured on-premises, and The following table describes some of the most common mapping of settings between an AD FS Relying Party Trust to Azure AD Enterprise Application: * AD FS—Find the setting in the AD FS Relying Party Trust for the app. Right-click the relying party and select Properties.-* Azure AD—The setting is configured within [Azure portal](https://portal.azure.com/) in each application's SSO properties. +* Azure AD—The setting is configured within [Entra portal](https://entra.microsoft.com/#home) in each application's SSO properties. | Configuration setting| AD FS| How to configure in Azure AD| SAML Token | | - | - | - | - | The following table describes some of the most common mapping of settings betwee Configure your applications to point to Azure AD versus AD FS for SSO. Here, we're focusing on SaaS apps that use the SAML protocol. However, this concept extends to custom line-of-business apps as well. > [!NOTE]-> The configuration values for Azure AD follows the pattern where your Azure Tenant ID replaces {tenant-id} and the Application ID replaces {application-id}. You find this information in the [Azure portal](https://portal.azure.com/) under **Azure Active Directory > Properties**: +> The configuration values for Azure AD follows the pattern where your Azure Tenant ID replaces {tenant-id} and the Application ID replaces {application-id}. 
You find this information in the [Entra portal](https://entra.microsoft.com/#home) under **Azure Active Directory > Properties**: * Select Directory ID to see your Tenant ID. * Select Application ID to see your Application ID. SaaS apps need to know where to send authentication requests and how to validate | - | - | - | | **IdP Sign-on URL** <p>Sign-on URL of the IdP from the app's perspective (where the user is redirected for login).| The AD FS sign-on URL is the AD FS federation service name followed by "/adfs/ls/." <p>For example: `https://fs.contoso.com/adfs/ls/`| Replace {tenant-id} with your tenant ID. <p> For apps that use the SAML-P protocol: [https://login.microsoftonline.com/{tenant-id}/saml2](https://login.microsoftonline.com/{tenant-id}/saml2) <p>For apps that use the WS-Federation protocol: [https://login.microsoftonline.com/{tenant-id}/wsfed](https://login.microsoftonline.com/{tenant-id}/wsfed) | | **IdP sign-out URL**<p>Sign-out URL of the IdP from the app's perspective (where the user is redirected when they choose to sign out of the app).| The sign-out URL is either the same as the sign-on URL, or the same URL with "wa=wsignout1.0" appended. For example: `https://fs.contoso.com/adfs/ls/?wa=wsignout1.0`| Replace {tenant-id} with your tenant ID.<p>For apps that use the SAML-P protocol:<p>[https://login.microsoftonline.com/{tenant-id}/saml2](https://login.microsoftonline.com/{tenant-id}/saml2) <p> For apps that use the WS-Federation protocol: [https://login.microsoftonline.com/common/wsfederation?wa=wsignout1.0](https://login.microsoftonline.com/common/wsfederation?wa=wsignout1.0) |-| **Token signing certificate**<p>The IdP uses the private key of the certificate to sign issued tokens. It verifies that the token came from the same IdP that the app is configured to trust.| Find the AD FS token signing certificate in AD FS Management under **Certificates**.| Find it in the Azure portal in the application's **Single sign-on properties** under the header **SAML Signing Certificate**. There, you can download the certificate for upload to the app. <p>If the application has more than one certificate, you can find all certificates in the federation metadata XML file. | +| **Token signing certificate**<p>The IdP uses the private key of the certificate to sign issued tokens. It verifies that the token came from the same IdP that the app is configured to trust.| Find the AD FS token signing certificate in AD FS Management under **Certificates**.| Find it in the Entra portal in the application's **Single sign-on properties** under the header **SAML Signing Certificate**. There, you can download the certificate for upload to the app. <p>If the application has more than one certificate, you can find all certificates in the federation metadata XML file. | | **Identifier/ "issuer"**<p>Identifier of the IdP from the app's perspective (sometimes called the "issuer ID").<p>In the SAML token, the value appears as the Issuer element.| The identifier for AD FS is usually the federation service identifier in AD FS Management under **Service > Edit Federation Service Properties**. For example: `http://fs.contoso.com/adfs/services/trust`| Replace {tenant-id} with your tenant ID.<p>https:\//sts.windows.net/{tenant-id}/ | | **IdP federation metadata**<p>Location of the IdP's publicly available federation metadata. 
(Some apps use federation metadata as an alternative to the administrator configuring URLs, identifier, and token signing certificate individually.)| Find the AD FS federation metadata URL in AD FS Management under **Service > Endpoints > Metadata > Type: Federation Metadata**. For example: `https://fs.contoso.com/FederationMetadat). | Explicit group authorization in AD FS: To map this rule to Azure AD: -1. In the [Azure portal](https://portal.azure.com/), [create a user group](../fundamentals/active-directory-groups-create-azure-portal.md) that corresponds to the group of users from AD FS. +1. In the [Entra portal](https://entra.microsoft.com/#home), [create a user group](../fundamentals/active-directory-groups-create-azure-portal.md) that corresponds to the group of users from AD FS. 1. Assign app permissions to the group:  Explicit user authorization in AD FS: To map this rule to Azure AD: -* In the [Azure portal](https://portal.azure.com/), add a user to the app through the Add Assignment tab of the app as shown below: +* In the [Entra portal](https://entra.microsoft.com/#home), add a user to the app through the Add Assignment tab of the app as shown below:  The following are examples of types of MFA rules in AD FS, and how you can map t MFA rule settings in AD FS: -  +  #### Example 1: Enforce MFA based on users/groups Emit attributes as Claims rule in AD FS: To map the rule to Azure AD: -1. In the [Azure portal](https://portal.azure.com/), select **Enterprise Applications** and then **Single sign-on** to view the SAML-based sign-on configuration: +1. In the [Entra portal](https://entra.microsoft.com/#home), select **Enterprise Applications** and then **Single sign-on** to view the SAML-based sign-on configuration:  In this table, we've listed some useful Permit and Except options and how they m | From Devices with Specific Trust Level| Set this from the **Device State** control under Assignments -> Conditions| Use the **Exclude** option under Device State Condition and Include **All devices** | | With Specific Claims in the Request| This setting can't be migrated| This setting can't be migrated | -Here's an example of how to configure the Exclude option for trusted locations in the Azure portal: +Here's an example of how to configure the Exclude option for trusted locations in the Entra portal:  For more information, see [Prerequisites for using Group attributes synchronized ### Set up user self-provisioning -Some SaaS applications support the ability to self-provision users when they first sign in to the application. In Azure AD, app provisioning refers to automatically creating user identities and roles in the cloud ([SaaS](https://azure.microsoft.com/overview/what-is-saas/)) applications that users need to access. Users that are migrated already have an account in the SaaS application. Any new users added after the migration need to be provisioned. Test [SaaS app provisioning](../app-provisioning/user-provisioning.md) once the application is migrated. +Some SaaS applications support the ability to Just-in-Time (JIT) provision users when they first sign in to the application. In Azure AD, app provisioning refers to automatically creating user identities and roles in the cloud ([SaaS](https://azure.microsoft.com/overview/what-is-saas/)) applications that users need to access. Users that are migrated already have an account in the SaaS application. Any new users added after the migration need to be provisioned. 
Test [SaaS app provisioning](../app-provisioning/user-provisioning.md) once the application is migrated. ### Sync external users in Azure AD Your existing external users can be set up in these two ways in AD FS: * **External users with a local account within your organization**—You continue to use these accounts in the same way that your internal user accounts work. These external user accounts have a principal name within your organization, although the account's email may point externally. As you progress with your migration, you can take advantage of the benefits that [Azure AD B2B](../external-identities/what-is-b2b.md) offers by migrating these users to use their own corporate identity when such an identity is available. This streamlines the process of signing in for those users, as they're often signed in with their own corporate logon. Your organization's administration is easier as well, by not having to manage accounts for external users. * **Federated external Identities**—If you are currently federating with an external organization, you have a few approaches to take:- * [Add Azure Active Directory B2B collaboration users in the Azure portal](../external-identities/add-users-administrator.md). You can proactively send B2B collaboration invitations from the Azure AD administrative portal to the partner organization for individual members to continue using the apps and assets they're used to. + * [Add Azure Active Directory B2B collaboration users in the Entra portal](../external-identities/add-users-administrator.md). You can proactively send B2B collaboration invitations from the Azure AD administrative portal to the partner organization for individual members to continue using the apps and assets they're used to. * [Create a self-service B2B sign-up workflow](../external-identities/self-service-portal.md) that generates a request for individual users at your partner organization using the B2B invitation API. No matter how your existing external users are configured, they likely have permissions that are associated with their account, either in group membership or specific permissions. Evaluate whether these permissions need to be migrated or cleaned up. Accounts within your organization that represent an external user need to be disabled once the user has been migrated to an external identity. The migration process should be discussed with your business partners, as there may be an interruption in their ability to connect to your resources. ## Migrate and test your apps -Follow the migration process detailed in this article. Then go to the [Azure portal](https://portal.azure.com/) to test if the migration was a success. +Follow the migration process detailed in this article. Then go to the [Entra portal](https://entra.microsoft.com/#home) to test if the migration was a success. Follow these instructions: |
active-directory | View Applications Portal | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/manage-apps/view-applications-portal.md | -It is recommended that you use a non-production environment to test the steps in this quickstart. +It's recommended that you use a nonproduction environment to test the steps in this quickstart. ## Prerequisites To view applications that have been registered in your Azure AD tenant, you need: - An Azure AD user account. If you don't already have one, you can [Create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F).-- One of the following roles: Global Administrator, Cloud Application Administrator, Application Administrator, or owner of the service principal.+- One of the following roles: Global Administrator or owner of the service principal. - Completion of the steps in [Quickstart: Add an enterprise application](add-application-portal.md). ## View a list of applications To view applications that have been registered in your Azure AD tenant, you need To view the enterprise applications registered in your tenant: 1. Go to the [Azure portal](https://portal.azure.com) and sign in using one of the roles listed in the prerequisites.-1. Browse to **Azure Active Directory** > **Enterprise applications**. The **All applications** pane opens and displays a list of the applications in your Azure AD tenant. +1. Browse to **Azure Active Directory** and select **Enterprise applications**. The **All applications** pane opens and displays a list of the applications in your Azure AD tenant. :::image type="content" source="media/view-applications-portal/view-enterprise-applications.png" alt-text="View the registered applications in your Azure AD tenant."::: To search for a particular application: Select options according to what you're looking for: -1. You can view the applications by **Application Type**, **Application Status**, and **Application visibility**. These three options are the default filters. +1. The default filters are **Application Type**, **Application ID starts with**, and **Application visibility**. 1. Under **Application Type**, choose one of these options: - **Enterprise Applications** shows non-Microsoft applications. - **Microsoft Applications** shows Microsoft applications. - **Managed Identities** shows applications that are used to authenticate to services that support Azure AD authentication. - **All Applications** shows both non-Microsoft and Microsoft applications.-1. Under **Application Status**, choose **Any**, **Disabled**, or **Enabled**. The **Any** option includes both disabled and enabled applications. +1. Under **Application ID starts with**, enter the first few digits of the application ID if you know the application ID. 1. Under **Application Visibility**, choose **Any** or **Hidden**. The **Hidden** option shows applications that are in the tenant, but aren't visible to users. 1. After choosing the options you want, select **Apply**.-1. Select **Add filters** to add more options for filtering the search results. The other options are: - - **Application ID** +1. Select **Add filters** to add more options for filtering the search results. The other options include: - - **Application Visibility** - - **Created on** - **Assignment required** - **Is App Proxy** |
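The same application list can also be checked outside the portal through Microsoft Graph. A minimal Python sketch, assuming an access token that carries the `Application.Read.All` permission; the token placeholder and the tag-based filtering are illustrative, not part of the quickstart above:

```python
import requests

GRAPH = "https://graph.microsoft.com/v1.0"
token = "<access-token-with-Application.Read.All>"  # illustrative placeholder

# List service principals (the objects shown under Enterprise applications).
resp = requests.get(
    f"{GRAPH}/servicePrincipals?$top=20&$select=id,appId,displayName,tags",
    headers={"Authorization": f"Bearer {token}"},
)
resp.raise_for_status()

for sp in resp.json()["value"]:
    # Enterprise (gallery/non-gallery) apps are commonly tagged this way;
    # filtering on the tag client-side is an assumption for illustration.
    if "WindowsAzureActiveDirectoryIntegratedApp" in (sp.get("tags") or []):
        print(sp["displayName"], sp["appId"])
```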
active-directory | Groups Role Settings | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/privileged-identity-management/groups-role-settings.md | -In Privileged Identity Management (PIM) for groups in Azure Active Directory (Azure AD), part of Microsoft Entra, role settings define membership or ownership assignment properties: MFA and approval requirements for activation, assignment maximum duration, notification settings, etc. Use the following steps to configure role settings and setup the approval workflow to specify who can approve or deny requests to elevate privilege. +In Privileged Identity Management (PIM) for groups in Azure Active Directory (Azure AD), part of Microsoft Entra, role settings define membership or ownership assignment properties: MFA and approval requirements for activation, assignment maximum duration, notification settings, etc. Use the following steps to configure role settings and set up the approval workflow to specify who can approve or deny requests to elevate privilege. You need to have Global Administrator, Privileged Role Administrator, or group Owner permissions to manage settings for membership or ownership assignments of the group. Role settings are defined per role per group: all assignments for the same role (member or owner) for the same group follow same role settings. Role settings of one group are independent from role settings of another group. Role settings for one role (member) are independent from role settings for another role (owner). Use the **Activation maximum duration** slider to set the maximum time, in hours ### On activation, require multi-factor authentication -You can require users who are eligible for a role to prove who they are using Azure AD Multi-Factor Authentication before they can activate. Multi-factor authentication ensures that the user is who they say they are with reasonable certainty. Enforcing this option protects critical resources in situations when the user account might have been compromised. +You can require users who are eligible for a role to prove who they are using Azure AD Multi-Factor Authentication before they can activate. Multi-factor authentication helps safeguard access to data and applications, providing another layer of security by using a second form of authentication. -User may not be prompted for multi-factor authentication if they authenticated with strong credential or provided multi-factor authentication earlier in this session. --For more information, see [Multifactor authentication and Privileged Identity Management](pim-how-to-require-mfa.md). +> [!NOTE] +> User may not be prompted for multi-factor authentication if they authenticated with strong credentials, or provided multi-factor authentication earlier in this session. If your goal is to ensure that users have to provide authentication during activation, you can use [On activation, require Azure AD Conditional Access authentication context](pim-how-to-change-default-settings.md#on-activation-require-azure-ad-conditional-access-authentication-context-public-preview) together with [Authentication Strengths](../authentication/concept-authentication-strengths.md) to require users to authenticate during activation using methods different from the one they used to sign-in to the machine. 
For example, if users sign-in to the machine using Windows Hello for Business, you can use ΓÇ£On activation, require Azure AD Conditional Access authentication contextΓÇ¥ and Authentication Strengths to require users to do Passwordless sign-in with Microsoft Authenticator when they activate the role. After the user provides Passwordless sign-in with Microsoft Authenticator once in this example, they'll be able to do their next activation in this session without additional authentication because Passwordless sign-in with Microsoft Authenticator will already be part of their token. +> +> It's recommended to enable Azure AD Multi-Factor Authentication for all users. For more information, see [Plan an Azure Active Directory Multi-Factor Authentication deployment](../authentication/howto-mfa-getstarted.md). ### On activation, require Azure AD Conditional Access authentication context (Public Preview) You can require users who are eligible for a role to satisfy Conditional Access To enforce this requirement, you need to: 1. Create Conditional Access authentication context.+ 1. Configure Conditional Access policy that would enforce requirements for this authentication context.+ > [!NOTE] + > The scope of the Conditional Access policy should include all or eligible users for group membership/ownership. Do not create a Conditional Access policy scoped to authentication context and group at the same time because during activation a user does not have group membership yet, and the Conditional Access policy would not apply. 1. Configure authentication context in PIM settings for the role. :::image type="content" source="media/pim-for-groups/pim-group-21.png" alt-text="Screenshot of the Edit role settings Member page." lightbox="media/pim-for-groups/pim-group-21.png"::: +> [!NOTE] +> If PIM settings have ΓÇ£**On activation, require Azure AD Conditional Access authentication context**ΓÇ¥ configured, Conditional Access policies define what conditions user needs to meet in order to satisfy the access requirements. This means that security principals with permissions to manage Conditional Access policies such as Conditional Access Administrators or Security Administrators may change requirements, remove them, or block eligible users from activating their group membership/ownership. Security principals that can manage Conditional Access policies should be considered highly privileged and protected accordingly. ++> [!NOTE] +> We recommend creating and enabling Conditional Access policy for the authentication context before the authentication context is configured in PIM settings. As a backup protection mechanism, if there are no Conditional Access policies in the tenant that target authentication context configured in PIM settings, during group membership/ownership activation, Azure AD Multi-Factor Authentication is required as the [On activation, require multi-factor authentication](groups-role-settings.md#on-activation-require-multi-factor-authentication) setting would be set. This backup protection mechanism is designed to solely protect from a scenario when PIM settings were updated before the Conditional Access policy is created, due to a configuration mistake. This backup protection mechanism will not be triggered if the Conditional Access policy is turned off, in report-only mode, or has eligible users excluded from the policy. 
++> [!NOTE] +> The **"On activation, require Azure AD Conditional Access authentication context"** setting defines the authentication context requirements that users need to satisfy when they activate group membership/ownership. After group membership/ownership is activated, this does not prevent users from using another browsing session, device, location, etc. to use group membership/ownership. For example, a user may use an Intune compliant device to activate group membership/ownership, then after the role is activated, sign in to the same user account from another device that is not Intune compliant, and use the previously activated group ownership/membership from there. To protect from this situation, you may scope Conditional Access policies enforcing certain requirements to eligible users directly. For example, you can require users eligible for certain group membership/ownership to always use Intune compliant devices. + To learn more about Conditional Access authentication context, see [Conditional Access: Cloud apps, actions, and authentication context](../conditional-access/concept-conditional-access-cloud-apps.md#authentication-context). ### Require justification on activation You can require that users enter a business justification when they activate the ### Require ticket information on activation -You can require that users enter a support ticket when they activate the eligible assignment. This is information only field and correlation with information in any ticketing system is not enforced. +You can require that users enter a support ticket when they activate the eligible assignment. This is an information-only field and correlation with information in any ticketing system isn't enforced. ### Require approval to activate -You can require approval for activation of eligible assignment. Approver doesn't have to be group member or owner. When using this option, you have to select at least one approver (we recommend to select at least two approvers), there are no default approvers. +You can require approval for activation of an eligible assignment. The approver doesn't have to be a group member or owner. When using this option, you have to select at least one approver (we recommend selecting at least two approvers); there are no default approvers. To learn more about approvals, see [Approve activation requests for PIM for Groups members and owners (preview)](groups-approval-workflow.md). |
active-directory | Pim How To Change Default Settings | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/privileged-identity-management/pim-how-to-change-default-settings.md | -In Privileged Identity Management (PIM) in Azure Active Directory (Azure AD), part of Microsoft Entra, role settings define role assignment properties: MFA and approval requirements for activation, assignment maximum duration, notification settings, and more. Use the following steps to configure role settings and setup the approval workflow to specify who can approve or deny requests to elevate privilege. +In Privileged Identity Management (PIM) in Azure Active Directory (Azure AD), part of Microsoft Entra, role settings define role assignment properties: MFA and approval requirements for activation, assignment maximum duration, notification settings, and more. Use the following steps to configure role settings and set up the approval workflow to specify who can approve or deny requests to elevate privilege. You need to have Global Administrator or Privileged Role Administrator role to manage PIM role settings for Azure AD Role. Role settings are defined per role: all assignments for the same role follow the same role settings. Role settings of one role are independent from role settings of another role. Use the **Activation maximum duration** slider to set the maximum time, in hours ### On activation, require multi-factor authentication -You can require users who are eligible for a role to prove who they are using Azure AD Multi-Factor Authentication before they can activate. Multi-factor authentication ensures that the user is who they say they are with reasonable certainty. Enforcing this option protects critical resources in situations when the user account might have been compromised. +You can require users who are eligible for a role to prove who they are using Azure AD Multi-Factor Authentication before they can activate. Multi-factor authentication helps safeguard access to data and applications, providing another layer of security by using a second form of authentication. -User may not be prompted for multi-factor authentication if they authenticated with strong credential or provided multi-factor authentication earlier in this session. --For more information, see [Multifactor authentication and Privileged Identity Management](pim-how-to-require-mfa.md). +> [!NOTE] +> User may not be prompted for multi-factor authentication if they authenticated with strong credentials, or provided multi-factor authentication earlier in this session. If your goal is to ensure that users have to provide authentication during activation, you can use [On activation, require Azure AD Conditional Access authentication context](pim-how-to-change-default-settings.md#on-activation-require-azure-ad-conditional-access-authentication-context-public-preview) together with [Authentication Strengths](../authentication/concept-authentication-strengths.md) to require users to authenticate during activation using methods different from the one they used to sign-in to the machine. For example, if users sign-in to the machine using Windows Hello for Business, you can use ΓÇ£On activation, require Azure AD Conditional Access authentication contextΓÇ¥ and Authentication Strengths to require users to do Passwordless sign-in with Microsoft Authenticator when they activate the role. 
After the user provides Passwordless sign-in with Microsoft Authenticator once in this example, they'll be able to do their next activation in this session without additional authentication because Passwordless sign-in with Microsoft Authenticator will already be part of their token. +> +> It's recommended to enable Azure AD Multi-Factor Authentication for all users. For more information, see [Plan an Azure Active Directory Multi-Factor Authentication deployment](../authentication/howto-mfa-getstarted.md). ### On activation, require Azure AD Conditional Access authentication context (Public Preview) You can require users who are eligible for a role to satisfy Conditional Access To enforce this requirement, you need to: 1. Create Conditional Access authentication context.+ 1. Configure Conditional Access policy that would enforce requirements for this authentication context.+ > [!NOTE] + > The scope of the Conditional Access policy should include all or eligible users for a role. Do not create a Conditional Access policy scoped to authentication context and directory role at the same time because during activation the user does not have a role yet, and the Conditional Access policy would not apply. See the note at the end of this section about a situation when you may need two Conditional Access policies, one scoped to the authentication context, and another scoped to the role. 1. Configure authentication context in PIM settings for the role. :::image type="content" source="media/pim-how-to-change-default-settings/role-settings-page.png" alt-text="Screenshot of the Edit role setting Attribute Definition Administrator page." lightbox="media/pim-how-to-change-default-settings/role-settings-page.png"::: +> [!NOTE] +> If PIM settings have **ΓÇ£On activation, require Azure AD Conditional Access authentication contextΓÇ¥** configured, the Conditional Access policies define conditions a user needs to meet to satisfy the access requirements. This means that security principals with permissions to manage Conditional Access policies such as Conditional Access Administrators or Security Administrators may change requirements, remove them, or block eligible users from activating the role. Security principals that can manage the Conditional Access policies should be considered highly privileged and protected accordingly. ++> [!NOTE] +> We recommend creating and enabling a Conditional Access policy for the authentication context before authentication context is configured in PIM settings. As a backup protection mechanism, if there are no Conditional Access policies in the tenant that target authentication context configured in PIM settings, during PIM role activation, Azure AD Multi-Factor Authentication is required as the [On activation, require multi-factor authentication](pim-how-to-change-default-settings.md#on-activation-require-multi-factor-authentication) setting would be set. This backup protection mechanism is designed to solely protect from a scenario when PIM settings were updated before the Conditional Access policy is created, due to a configuration mistake. This backup protection mechanism won't be triggered if the Conditional Access policy is turned off, in report-only mode, or has eligible user excluded from the policy. ++> [!NOTE] +> **ΓÇ£On activation, require Azure AD Conditional Access authentication contextΓÇ¥** setting defines authentication context, requirements for which the user will need to satisfy when they activate the role. 
After the role is activated, this does not prevent users from using another browsing session, device, location, etc. to use permissions. For example, users may use an Intune compliant device to activate the role, then after the role is activated sign in to the same user account from another device that is not Intune compliant, and use the previously activated role from there. +> To protect from this situation, create two Conditional Access policies: +>1. The first Conditional Access policy targeted to authentication context. It should have "*All users*" or eligible users in its scope. This policy will specify requirements the user needs to meet to activate the role. +>1. The second Conditional Access policy targeted to directory roles. This policy will specify requirements users need to meet to sign in with the directory role activated. +> +>Both policies can enforce the same, or different, requirements depending on your needs. +> +>Another option is to scope Conditional Access policies enforcing certain requirements to eligible users directly. For example, you can require users eligible for certain roles to always use Intune compliant devices. + To learn more about Conditional Access authentication context, see [Conditional Access: Cloud apps, actions, and authentication context](../conditional-access/concept-conditional-access-cloud-apps.md#authentication-context). ### Require justification on activation |
active-directory | Pim How To Require Mfa | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/privileged-identity-management/pim-how-to-require-mfa.md | - Title: MFA or 2FA and Privileged Identity Management -description: Learn how Azure AD Privileged Identity Management (PIM) validates multifactor authentication (MFA). -------- Previously updated : 06/24/2022------# Multifactor authentication and Privileged Identity Management --We recommend that you require multifactor authentication (MFA or 2FA) for all your administrators. Multifactor authentication reduces the risk of an attack using a compromised password. --You can require that users complete a multifactor authentication challenge when they sign in. You can also require that users complete a multifactor authentication challenge when they activate a role in Privileged Identity Management (PIM) in Azure Active Directory (Azure AD), part of Microsoft Entra. This way, even if the user didn't complete multifactor authentication when they signed in, they'll be asked to do it by Privileged Identity Management. --> [!IMPORTANT] -> Right now, Azure AD Multi-Factor Authentication only works with work or school accounts, not Microsoft personal accounts (usually a personal account that's used to sign in to Microsoft services such as Skype, Xbox, or Outlook.com). Because of this, anyone using a personal account can't be an eligible administrator because they can't use multifactor authentication to activate their roles. If these users need to continue managing workloads using a Microsoft account, elevate them to permanent administrators for now. --## How PIM validates MFA --There are two options for validating multifactor authentication when a user activates a role. --The simplest option is to rely on Azure AD Multi-Factor Authentication for users who are activating a privileged role. To do this, first check that those users are licensed, if necessary, and have registered for Azure AD Multi-Factor Authentication. For more information about how to deploy Azure AD Multi-Factor Authentication, see [Deploy cloud-based Azure AD Multi-Factor Authentication](../authentication/howto-mfa-getstarted.md). It is recommended, but not required, that you configure Azure AD to enforce multifactor authentication for these users when they sign in. This is because the multifactor authentication checks will be made by Privileged Identity Management itself. --Alternatively, if users authenticate on-premises you can have your identity provider be responsible for multifactor authentication. For example, if you have configured AD Federation Services to require smartcard-based authentication before accessing Azure AD, [Securing cloud resources with Azure AD Multi-Factor Authentication and AD FS](../authentication/howto-mfa-adfs.md) includes instructions for configuring AD FS to send claims to Azure AD. When a user tries to activate a role, Privileged Identity Management will accept that multifactor authentication has already been validated for the user once it receives the appropriate claims. --## Next steps --- [Configure Azure AD role settings in Privileged Identity Management](pim-how-to-change-default-settings.md)-- [Configure Azure resource role settings in Privileged Identity Management](pim-resource-roles-configure-role-settings.md) |
active-directory | Pim Resource Roles Configure Alerts | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/privileged-identity-management/pim-resource-roles-configure-alerts.md | Alert | Severity | Trigger | Recommendation **Too many owners assigned to a resource** | Medium | Too many users have the owner role. | Review the users in the list and reassign some to less privileged roles. **Too many permanent owners assigned to a resource** | Medium | Too many users are permanently assigned to a role. | Review the users in the list and reassign some to require activation for role use. **Duplicate role created** | Medium | Multiple roles have the same criteria. | Use only one of these roles.-**Roles are being assigned outside of Privileged Identity Management (Preview)** | High | A role is managed directly through the Azure IAM resource, or the Azure Resource Manager API. | Review the users in the list and remove them from privileged roles assigned outside of Privilege Identity Management. --> [!NOTE] -> During the public preview of the **Roles are being assigned outside of Privileged Identity Management (Preview)** alert, Microsoft supports only permissions that are assigned at the subscription level. +**Roles are being assigned outside of Privileged Identity Management** | High | A role is managed directly through the Azure IAM resource, or the Azure Resource Manager API. | Review the users in the list and remove them from privileged roles assigned outside of Privileged Identity Management. ### Severity |
active-directory | Pim Resource Roles Configure Role Settings | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/privileged-identity-management/pim-resource-roles-configure-role-settings.md | -In Privileged Identity Management (PIM) in Azure Active Directory (Azure AD), part of Microsoft Entra, role settings define role assignment properties: MFA and approval requirements for activation, assignment maximum duration, notification settings, and more. Use the following steps to configure role settings and setup the approval workflow to specify who can approve or deny requests to elevate privilege. +In Privileged Identity Management (PIM) in Azure Active Directory (Azure AD), part of Microsoft Entra, role settings define role assignment properties: MFA and approval requirements for activation, assignment maximum duration, notification settings, and more. Use the following steps to configure role settings and set up the approval workflow to specify who can approve or deny requests to elevate privilege. -You need to have Owner or User Access Administrator role to manage PIM role settings for the resource. Role settings are defined per role and per resource: all assignments for the same role follow the same role settings. Role settings of one role are independent from role settings of another role. Role settings of one resource are independent from role settings of another resource, and role settings configured on a higher level, such as "Subscription" for example, are not inherited on a lower level, such as "Resource Group" for example. +You need to have Owner or User Access Administrator role to manage PIM role settings for the resource. Role settings are defined per role and per resource: all assignments for the same role follow the same role settings. Role settings of one role are independent from role settings of another role. Role settings of one resource are independent from role settings of another resource, and role settings configured on a higher level, such as "Subscription" for example, aren't inherited on a lower level, such as "Resource Group" for example. PIM role settings are also known as ΓÇ£PIM PoliciesΓÇ¥. Use the **Activation maximum duration** slider to set the maximum time, in hours ### On activation, require multi-factor authentication -You can require users who are eligible for a role to prove who they are using Azure AD Multi-Factor Authentication before they can activate. Multi-factor authentication ensures that the user is who they say they are with reasonable certainty. Enforcing this option protects critical resources in situations when the user account might have been compromised. +You can require users who are eligible for a role to prove who they are using Azure AD Multi-Factor Authentication before they can activate. Multi-factor authentication helps safeguard access to data and applications, providing another layer of security by using a second form of authentication. -User may not be prompted for multi-factor authentication if they authenticated with strong credential or provided multi-factor authentication earlier in this session. --For more information, see [Multifactor authentication and Privileged Identity Management](pim-how-to-require-mfa.md). +> [!NOTE] +> User may not be prompted for multi-factor authentication if they authenticated with strong credentials, or provided multi-factor authentication earlier in this session. 
If your goal is to ensure that users have to provide authentication during activation, you can use [On activation, require Azure AD Conditional Access authentication context](pim-how-to-change-default-settings.md#on-activation-require-azure-ad-conditional-access-authentication-context-public-preview) together with [Authentication Strengths](../authentication/concept-authentication-strengths.md) to require users to authenticate during activation using methods different from the one they used to sign-in to the machine. For example, if users sign-in to the machine using Windows Hello for Business, you can use ΓÇ£On activation, require Azure AD Conditional Access authentication contextΓÇ¥ and Authentication Strengths to require users to do Passwordless sign-in with Microsoft Authenticator when they activate the role. After the user provides Passwordless sign-in with Microsoft Authenticator once in this example, they'll be able to do their next activation in this session without additional authentication because Passwordless sign-in with Microsoft Authenticator will already be part of their token. +> +> It's recommended to enable Azure AD Multi-Factor Authentication for all users. For more information, see [Plan an Azure Active Directory Multi-Factor Authentication deployment](../authentication/howto-mfa-getstarted.md). ### On activation, require Azure AD Conditional Access authentication context (Public Preview) To enforce this requirement, you need to: :::image type="content" source="media/pim-resource-roles-configure-role-settings/resources-role-setting-details.png" alt-text="Screenshot of the Edit role settings Attestation Reader page." lightbox="media/pim-resource-roles-configure-role-settings/resources-role-setting-details.png"::: +> [!NOTE] +> If PIM settings have **ΓÇ£On activation, require Azure AD Conditional Access authentication contextΓÇ¥** configured, the Conditional Access policies define conditions a user needs to meet to satisfy the access requirements. This means that security principals with permissions to manage Conditional Access policies such as Conditional Access Administrators or Security Administrators may change requirements, remove them, or block eligible users from activating the role. Security principals that can manage the Conditional Access policies should be considered highly privileged and protected accordingly. ++> [!NOTE] +> We recommend creating and enabling a Conditional Access policy for the authentication context before the authentication context is configured in PIM settings. As a backup protection mechanism, if there are no Conditional Access policies in the tenant that target authentication context configured in PIM settings, during PIM role activation, Azure AD Multi-Factor Authentication is required as the [On activation, require multi-factor authentication](pim-resource-roles-configure-role-settings.md#on-activation-require-multi-factor-authentication) setting would be set. This backup protection mechanism is designed to solely protect from a scenario when PIM settings were updated before the Conditional Access policy is created, due to a configuration mistake. This backup protection mechanism won't be triggered if the Conditional Access policy is turned off, in report-only mode, or has eligible user excluded from the policy. ++> [!NOTE] +> **ΓÇ£On activation, require Azure AD Conditional Access authentication contextΓÇ¥** setting defines authentication context, requirements for which users will need to satisfy when they activate the role. 
After the role is activated, this does not prevent a user from using another browsing session, device, location, etc. to use permissions. For example, users may use an Intune compliant device to activate the role, then after the role is activated sign in to the same user account from another device that is not Intune compliant, and use the previously activated role from there. To protect from this situation, you may scope Conditional Access policies enforcing certain requirements to eligible users directly. For example, you can require users eligible for certain roles to always use Intune compliant devices. + To learn more about Conditional Access authentication context, see [Conditional Access: Cloud apps, actions, and authentication context](../conditional-access/concept-conditional-access-cloud-apps.md#authentication-context). ### Require justification on activation |
active-directory | Permissions Reference | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/roles/permissions-reference.md | Users with this role have global read-only access on security-related feature, i In | Can do | [Microsoft 365 Defender portal](/microsoft-365/security/defender/microsoft-365-defender-portal) | View security-related policies across Microsoft 365 services<br>View security threats and alerts<br>View reports-[Identity Protection](../identity-protection/overview-identity-protection.md) | Read all security reports and settings information for security features<br><ul><li>Anti-spam<li>Encryption<li>Data loss prevention<li>Anti-malware<li>Advanced threat protection<li>Anti-phishing<li>Mail flow rules +[Identity Protection](../identity-protection/overview-identity-protection.md) | View all Identity Protection reports and Overview [Privileged Identity Management](../privileged-identity-management/pim-configure.md) | Has read-only access to all information surfaced in Azure AD Privileged Identity Management: Policies and reports for Azure AD role assignments and security reviews.<br>**Cannot** sign up for Azure AD Privileged Identity Management or make any changes to it. In the Privileged Identity Management portal or via PowerShell, someone in this role can activate additional roles (for example, Global Administrator or Privileged Role Administrator), if the user is eligible for them. [Microsoft Purview compliance portal](/microsoft-365/compliance/microsoft-365-compliance-center) | View security policies<br>View and investigate security threats<br>View reports [Microsoft Defender for Endpoint](/microsoft-365/security/defender-endpoint/prepare-deployment) | View and investigate alerts<br/>When you turn on role-based access control in Microsoft Defender for Endpoint, users with read-only permissions such as the Security Reader role lose access until they are assigned a Microsoft Defender for Endpoint role. |
active-directory | Admin Api | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/verifiable-credentials/admin-api.md | The Microsoft Entra Verified ID Admin API enables you to manage all aspects of t ## Base URL -The Admin API is server over HTTPS. All URLs referenced in the documentation have the following base: `https://verifiedid.did.msidentity.com`. +The Admin API is served over HTTPS. All URLs referenced in the documentation have the following base: `https://verifiedid.did.msidentity.com`. ## Authentication -The API is protected through Azure Active Directory and uses OAuth2 bearer tokens. The app registration needs to have the API Permission for `Verifiable Credentials Service Admin` and then when acquiring the access token the app should use scope `6a8b4b39-c021-437c-b060-5a14a3fd65f3/full_access`. The access token must be for a user with the [global administrator](../../active-directory/roles/permissions-reference.md#global-administrator) or the [authentication policy administrator](../../active-directory/roles/permissions-reference.md#authentication-policy-administrator) role. +The API is protected through Azure Active Directory and uses OAuth2 bearer tokens. The access token can be for a user or for an application. ++### User bearer tokens ++The app registration needs to have the API Permission for `Verifiable Credentials Service Admin` and then when acquiring the access token the app should use scope `6a8b4b39-c021-437c-b060-5a14a3fd65f3/full_access`. The access token must be for a user with the [global administrator](../../active-directory/roles/permissions-reference.md#global-administrator) or the [authentication policy administrator](../../active-directory/roles/permissions-reference.md#authentication-policy-administrator) role. A user with the [global reader](../../active-directory/roles/permissions-reference.md#global-reader) role can perform read-only API calls. ++### Application bearer tokens ++The `Verifiable Credentials Service Admin` service supports the following application permissions. ++| Permission | Description | +| - | -- | +| VerifiableCredential.Authority.ReadWrite | Permission to read/write authority object(s) | +| VerifiableCredential.Contract.ReadWrite | Permission to read/write contract object(s) | +| VerifiableCredential.Credential.Search | Permission to search for a credential to revoke | +| VerifiableCredential.Credential.Revoke | Permission to [revoke a previously issued credential](how-to-issuer-revoke.md) | +| VerifiableCredential.Network.Read | Permission to read entries from the [Verified ID Network](vc-network-api.md) | ++The app registration needs to have the API Permission for `Verifiable Credentials Service Admin` and the permissions required from the table above. When acquiring the access token, via the [client credentials flow](../../active-directory/develop/v2-oauth2-client-creds-grant-flow.md), the app should use scope `6a8b4b39-c021-437c-b060-5a14a3fd65f3/.default`. ## Onboarding Content-type: application/json } ``` -Repeatedly calling this API will result in the exact same return message. +Repeatedly calling this API results in the exact same return message. ## Authorities We support two different didModels.
One is `ion` and the other supported method | `recoveryKeys` | string array | URL to the recovery key | | `encryptionKeys` | string array | URL to the encryption key | | `linkedDomainUrls` | string array | Domains linked to this DID |-| `didDocumentStatus` | string | status of the DID, `published` when it's written to ION otherwise it will be `submitted`| +| `didDocumentStatus` | string | status of the DID, `published` when it's written to ION, otherwise it is `submitted`| #### Web Content-type: application/json ### Create authority -This call creates a new **private key**, recovery key and update key, stores these in the specified Azure Key Vault and sets the permissions to this Key Vault for the verifiable credential service and a create new **DID** with corresponding DID Document and commits that to the ION network. +This call creates a new **private key**, recovery key and update key, stores these keys in the specified Azure Key Vault and sets the permissions to this Key Vault for the verifiable credential service, creates a new **DID** with a corresponding DID Document, and commits that to the ION network. #### HTTP request Content-type: application/json Accepted ``` -The didDocumentStatus will switch to `submitted` it will take a while before the change is committed to the ION network. +The didDocumentStatus switches to `submitted`; it will take a while before the change is committed to the ION network. If you try to submit a change before the operation is completed, you'll get the following error message: Content-type: application/json } ``` -Save this result with the file name did-configuration.json and upload this file to the correct folder and website. If you specify a domain not linked to this DID/DID Document, you'll receive an error: +Save this result with the file name did-configuration.json and upload this file to the correct folder and website. If you specify a domain not linked to this DID/DID Document, you receive an error: ``` HTTP/1.1 400 Bad Request The response contains the following properties |`vc`| vcType array | types for this contract | |`customStatusEndpoint`| [customStatusEndpoint](#customstatusendpoint-type) (optional) | status endpoint to include in the verifiable credential for this contract | -If the property `customStatusEndpoint` property isn't specified then the `anonymous` status endpoint is used. +If the `customStatusEndpoint` property isn't specified, then the `anonymous` status endpoint is used. #### attestations type example message: ### Create contract When creating a contract the name has to be unique in the tenant. In case you have created multiple authorities, the contract name has to be unique across all authorities.-The name of the contract will be part of the contract URL which is used in the issuance requests. +The name of the contract will be part of the contract URL, which is used in the issuance requests. #### HTTP request |
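As a companion to the authentication section above, here's a minimal Python sketch of acquiring an application bearer token with MSAL through the client credentials flow and calling the Admin API. The tenant ID, client ID, and client secret are placeholders, and the `/v1.0/verifiableCredentials/authorities` path is assumed from the Authorities section of the Admin API:

```python
import msal
import requests

TENANT_ID = "<tenant-id>"          # placeholder
CLIENT_ID = "<app-client-id>"      # placeholder
CLIENT_SECRET = "<client-secret>"  # placeholder

app = msal.ConfidentialClientApplication(
    CLIENT_ID,
    client_credential=CLIENT_SECRET,
    authority=f"https://login.microsoftonline.com/{TENANT_ID}",
)

# Client credentials flow: request the Verifiable Credentials Service Admin scope.
result = app.acquire_token_for_client(
    scopes=["6a8b4b39-c021-437c-b060-5a14a3fd65f3/.default"]
)
if "access_token" not in result:
    raise SystemExit(result.get("error_description", "Token acquisition failed"))

# Example read-only call with the application bearer token (path assumed).
resp = requests.get(
    "https://verifiedid.did.msidentity.com/v1.0/verifiableCredentials/authorities",
    headers={"Authorization": f"Bearer {result['access_token']}"},
)
print(resp.status_code, resp.json())
```

The app registration used here would need at least `VerifiableCredential.Authority.ReadWrite` from the permission table above for the call to succeed.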
active-directory | How To Dnsbind | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/verifiable-credentials/how-to-dnsbind.md | To verify domain ownership to your DID, you need to have completed the following ## Verify domain ownership and distribute did-configuration.json file -The domain you will verify ownership of to your DID is defined in the organizational settings. +The domain you verify ownership of to your DID is defined in the [overview section](verifiable-credentials-configure-tenant.md#set-up-verified-id). The domain needs to be a domain under your control and it should be in the format `https://www.example.com/`. 1. From the Azure portal, navigate to the VerifiedID page. 1. Select **Setup**, then **Verify domain ownership** and choose **Verify** for the domain -1. Copy or download the `did-configuration.json` file shown in the image below. +1. Copy or download the `did-configuration.json` file.  -1. Host the `did-configuration.json` file at the location specified. Example: `https://www.example.com/.well-known/did-configuration.json` -There can be no additional path in the URL other than the .well-known path name. +1. Host the `did-configuration.json` file at the location specified. Example: If you specified the domain `https://www.example.com`, the file needs to be hosted at this URL: `https://www.example.com/.well-known/did-configuration.json`. +There can be no additional path in the URL other than the `.well-known` path name. 1. When the `did-configuration.json` is publicly available at the .well-known/did-configuration.json URL, verify it by pressing the **Refresh verification status** button. There can be no additional path in the URL other than the .well-known path name. ## How can I verify that the verification is working? -The portal verifies that the `did-configuration.json` is reachable over public internet and valid when you click the **Refresh verification status** button. Microsoft Authenticator do not honor http redirects. You should also consider verifying that you can request that URL in a browser to avoid errors like not using https, a bad SSL certificate or the URL not being public. If the `did-configuration.json` file cannot be requested anonymously in a browser or via tools such as `curl`, without warnings or errors, the portal will not be able to complete the **Refresh verification status** step either. +The portal verifies that the `did-configuration.json` is reachable over public internet and valid when you click the **Refresh verification status** button. Microsoft Authenticator does not honor http redirects. You should also consider verifying that you can request that URL in a browser to avoid errors like not using https, a bad SSL certificate or the URL not being public. If the `did-configuration.json` file can't be requested anonymously in a browser or via tools such as `curl`, without warnings or errors, the portal can't complete the **Refresh verification status** step either. >[!NOTE] > If you are experiencing problems refreshing your verification status, you can troubleshoot it via running `curl -Iv https://yourdomain.com/.well-known/did-configuration.json` on a machine with Ubuntu OS. Windows Subsystem for Linux with Ubuntu will work too. If curl fails, refreshing the verification status will not work. It is of high importance that you link your DID to a domain recognizable to the ## How do you update the linked domain on your DID? -If your trust system is Web, then updating your linked domain is not supported.
You have to opt-out and re-onboard. If your trust system is ION, you can update the linked domain via redoing the **Verify domain ownership** step. It might take up to two hours for your DID document to be updated in the [ION network](https://identity.foundation/ion) with the new domain information. No other changes to the domain are possible before the changes are published. +If your trust system is Web, then updating your linked domain isn't supported. You have to opt-out and re-onboard. If your trust system is ION, you can update the linked domain via redoing the **Verify domain ownership** step. It might take up to two hours for your DID document to be updated in the [ION network](https://identity.foundation/ion) with the new domain information. No other changes to the domain are possible before the changes are published. ### How do I know when the linked domain update has successfully completed? -If the trust system is ION, once the domain changes are published to ION, the domain section inside the Microsoft Entra Verified ID service will display Published as the status and you should be able to make new changes to the domain. If the trust system is Web, the changes are public as soon as you replace the did-configuration.json file on your web server. +If the trust system is ION, once the domain changes are published to ION, the domain section inside the Microsoft Entra Verified ID service displays Published as the status and you should be able to make new changes to the domain. If the trust system is Web, the changes are public as soon as you replace the did-configuration.json file on your web server. >[!IMPORTANT] > No changes to your domain are possible while publishing is in progress. ## Linked Domain made easy for developers -The easiest way for a developer to get a domain to use for linked domain is to use Azure Storage's static website feature. You can't control what the domain name will be, other than it will contain your storage account name as part of it's hostname. +The easiest way for a developer to get a domain to use for linked domain is to use Azure Storage's static website feature. You can't control what the domain name is, other than it contains your storage account name as part of its hostname. -Follow these steps to quickly set up a domain to use for Linked Domain: +Follow these steps to quickly set up a domain to use for Linked Domain: 1. Create an **Azure Storage account**. During storage account creation, choose StorageV2 (general-purpose v2 account) and Locally redundant storage (LRS). 1. Go to that Storage Account and select **Static website** in the left hand menu and enable static website. If you can't see the **Static website** menu item, you didn't create a **V2** storage account. |
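Alongside the `curl` troubleshooting tip above, a minimal Python sketch that checks the well-known URL is publicly reachable and isn't served through a redirect; the domain value is a placeholder:

```python
import requests

domain = "https://www.example.com"  # placeholder: the linked domain you verified
url = f"{domain}/.well-known/did-configuration.json"

# Authenticator doesn't follow redirects, so treat any redirect as a failure.
resp = requests.get(url, allow_redirects=False, timeout=10)
if resp.status_code != 200:
    raise SystemExit(f"Expected HTTP 200, got {resp.status_code} for {url}")

config = resp.json()
# A valid did-configuration.json carries one or more linked-domain credentials.
print("linked_dids entries:", len(config.get("linked_dids", [])))
```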
active-directory | How To Use Quickstart Verifiedemployee | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/verifiable-credentials/how-to-use-quickstart-verifiedemployee.md | When you select + Add credential in the portal, you get the option to launch two  -In the next screen, you enter some of the Display definitions, like logo url, text and background color. Since the credential is a managed credential with directory based claims, rules definitions are predefined. You don't need to enter rule definition details. The credential type will be **VerifiedEmployee** and the claims from the user's profile are pre-set. Select Create to create the credential. +In the next screen, you enter some of the Display definitions, like logo URL, text and background color. Since the credential is a managed credential with directory based claims, rules definitions are predefined and can't be changed. You don't need to enter rule definition details. The credential type will be **VerifiedEmployee** and the claims from the user's profile are pre-set. Select Create to create the credential.  ## Claims schema for Verified employee credential -All of the claims in the Verified employee credential come from attributes in the [user's profile](/graph/api/resources/user) in Azure AD for the issuing tenant. All claims, except photo, come from the Microsoft Graph Query [https://graph.microsoft.com/v1.0/me](/graph/api/user-get). The photo claim comes from the value returned from the Microsoft Graph Query [https://graph.microsoft.com/v1.0/me/photo/$value.](/graph/api/profilephoto-get) +All of the claims in the Verified employee credential come from attributes in the [user's profile](/graph/api/resources/user) in Azure AD for the issuing tenant. You can't modify the set of claims. All claims, except photo, come from the Microsoft Graph Query [https://graph.microsoft.com/v1.0/me](/graph/api/user-get). The photo claim comes from the value returned from the Microsoft Graph Query [https://graph.microsoft.com/v1.0/me/photo/$value.](/graph/api/profilephoto-get) | Claim | Directory attribute | Value | |||| |
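For reference, the two Microsoft Graph queries named above can be exercised directly to inspect the directory attributes behind the claims. A minimal Python sketch, assuming a delegated access token for the signed-in user; the token value is a placeholder:

```python
import requests

token = "<delegated-access-token>"  # placeholder
headers = {"Authorization": f"Bearer {token}"}

# Profile attributes that feed the VerifiedEmployee claims (except photo).
me = requests.get("https://graph.microsoft.com/v1.0/me", headers=headers)
me.raise_for_status()
profile = me.json()
print(profile.get("displayName"), profile.get("jobTitle"), profile.get("mail"))

# The photo claim comes from the user's profile photo.
photo = requests.get(
    "https://graph.microsoft.com/v1.0/me/photo/$value", headers=headers
)
if photo.status_code == 200:
    with open("photo.jpg", "wb") as f:
        f.write(photo.content)
```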
active-directory | Issuance Request Api | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/verifiable-credentials/issuance-request-api.md | The payload contains the following properties: | `type` | string | The verifiable credential type. Should match the type as defined in the verifiable credential manifest. For example: `VerifiedCredentialExpert`. For more information, see [Create the verified credential expert card in Azure](verifiable-credentials-configure-issuer.md). | | `manifest` | string| The URL of the verifiable credential manifest document. For more information, see [Gather credentials and environment details to set up your sample application](verifiable-credentials-configure-issuer.md).| | `claims` | string| Optional. Used for the `ID token hint` flow to include a collection of assertions made about the subject in the verifiable credential. For PIN code flow, it's important that you provide the user's first name and last name. For more information, see [Verifiable credential names](verifiable-credentials-configure-issuer.md#verifiable-credential-names). |-| `pin` | [PIN](#pin-type)| Optional. A PIN number to provide extra security during issuance. For PIN code flow, this property is required. You generate a PIN code, and present it to the user in your app. The user must provide the PIN code that you generated. | +| `pin` | [PIN](#pin-type)| Optional. PIN code can only be used with the [ID token hint](rules-and-display-definitions-model.md#idtokenhintattestation-type) attestation flow. A PIN number to provide extra security during issuance. You generate a PIN code, and present it to the user in your app. The user must provide the PIN code that you generated. | There are currently four claims attestation types that you can send in the payload. Microsoft Entra Verified ID uses four ways to insert claims into a verifiable credential and attest to that information with the issuer's DID. The following are the four types: |
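To tie the payload properties above together, here's a minimal Python sketch of an issuance request for the ID token hint flow with a PIN. The authority DID, manifest URL, callback URL, claim values, and access token are placeholders; the `createIssuanceRequest` endpoint path and the authority/registration/callback properties are assumed from the Request Service API payload model rather than shown in the table above:

```python
import requests

# Placeholder payload: replace each value with your tenant's own settings.
payload = {
    "authority": "did:web:verifiedid.contoso.com",
    "registration": {"clientName": "Contoso issuance sample"},
    "callback": {
        "url": "https://contoso.com/api/issuer/callback",
        "state": "de19cb6b-36c1-45fe-9409-909a51292a9c",
    },
    "type": "VerifiedCredentialExpert",
    "manifest": "<credential-manifest-url-from-the-portal>",
    # ID token hint flow: include first and last name in the claims collection.
    "claims": {"given_name": "Megan", "family_name": "Bowen"},
    # PIN code is generated by your app and shown to the user.
    "pin": {"value": "3539", "length": 4},
}

access_token = "<request-service-api-access-token>"  # placeholder

resp = requests.post(
    "https://verifiedid.did.msidentity.com/v1.0/verifiableCredentials/createIssuanceRequest",
    headers={"Authorization": f"Bearer {access_token}"},
    json=payload,
)
print(resp.status_code, resp.json())
```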
active-directory | Rules And Display Definitions Model | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/verifiable-credentials/rules-and-display-definitions-model.md | Rules and Display definitions are used to define a credential. You can read more | Property | Type | Description | | -- | -- | -- |-| `attestations`| [idTokenAttestation](#idtokenattestation-type) and/or [idTokenHintAttestation](#idtokenhintattestation-type) and/or [verifiablePresentationAttestation](#verifiablepresentationattestation-type) and/or [selfIssuedAttestation](#selfissuedattestation-type) | +| `attestations`| [idTokenAttestation](#idtokenattestation-type) and/or [idTokenHintAttestation](#idtokenhintattestation-type) and/or [verifiablePresentationAttestation](#verifiablepresentationattestation-type) and/or [selfIssuedAttestation](#selfissuedattestation-type) | defines the attestation flow(s) to be used for gathering claims to issue in the verifiable credential. | | `validityInterval` | number | represents the lifespan of the credential in seconds | | `vc`| [vcType](#vctype-type) | verifiable credential types for this contract | -+The following example shows the attestation types in JSON. Notice that `selfIssued` is a single instance while the others are collections. For examples of how to use the attestation types, see the [Sample JSON rules definitions](how-to-use-quickstart-multiple.md#sample-json-rules-definitions) in the How-to guides. + +```json +"attestations": { + "idTokens": [], + "idTokenHints": [], + "presentations": [], + "selfIssued": {} +} +``` ### idTokenAttestation type When you sign in the user from within Authenticator, you can use the returned ID token from the OpenID Connect compatible provider as input. When you want the user to enter information themselves. This type is also called | Property | Type | Description | | -- | -- | -- | |`label`| string | the label of the claim in display |-|`claim`| string | the name of the claim to which the label applies | +|`claim`| string | the name of the claim to which the label applies. For the JWT-VC format, the value needs to have the `vc.credentialSubject.` prefix. | |`type`| string | the type of the claim | |`description` | string (optional) | the description of the claim | |
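To illustrate the `vc.credentialSubject.` prefix noted above, a minimal sketch pairing an ID token hint attestation mapping with the matching display claim entries; the claim and label names are illustrative, not prescribed by the model:

```python
# Rules definition fragment (as a Python dict): map two inbound claims
# from the ID token hint into the credential subject.
rules_attestations = {
    "idTokenHints": [
        {
            "mapping": [
                {"outputClaim": "firstName", "inputClaim": "$.given_name",
                 "required": True, "indexed": False},
                {"outputClaim": "lastName", "inputClaim": "$.family_name",
                 "required": True, "indexed": True},
            ],
            "required": False,
        }
    ]
}

# Display definition fragment: note the vc.credentialSubject. prefix
# on the claim names when the credential uses the JWT-VC format.
display_claims = [
    {"claim": "vc.credentialSubject.firstName", "label": "First name", "type": "String"},
    {"claim": "vc.credentialSubject.lastName", "label": "Last name", "type": "String"},
]
```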
active-directory | Services Partners | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/verifiable-credentials/services-partners.md | If you're a Services Partner and would like to be considered into Entra Verified | Services partner | Website | |:-|:--|-|  | [Secure Personally Identifiable Information | AffinitiQuest](https://affinitiquest.io/) | |  | [Avanade Entra Verified ID Consulting Services](https://appsource.microsoft.com/marketplace/consulting-services/avanadeinc.ava_entra_verified_id_fy23?exp=ubp8) | |  | [Credivera: Digital Identity Solutions | Verifiable Credentials](https://www.credivera.com/) | |  | [Decentralized Identity | Condatis](https://condatis.com/technology/decentralized-identity/) | |
active-directory | Whats New | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/verifiable-credentials/whats-new.md | +## March 2023 ++- Admin API now supports [application access tokens](admin-api.md#authentication) in addition to user bearer tokens. +- Introducing the Entra Verified ID [Services partner gallery](services-partners.md) listing trusted partners that can help accelerate your Entra Verified ID implementation. +- Improvements to our Administrator onboarding experience in the [Admin portal](verifiable-credentials-configure-tenant.md#register-decentralized-id-and-verify-domain-ownership) based on customer feedback. +- Updates to our samples in [GitHub](https://github.com/Azure-Samples/active-directory-verifiable-credentials) showcasing how to dynamically display VC claims. ++## February 2023 ++- *Public preview* - Entitlement Management customers can now create access packages that leverage Entra Verified ID [learn more](../../active-directory/governance/entitlement-management-verified-id-settings.md) ++- The Request Service API can now do a revocation check for presented verifiable credentials that were issued with the [StatusList2021](https://w3c.github.io/vc-status-list-2021/) or [RevocationList2020](https://w3c-ccg.github.io/vc-status-rl-2020/) status list types. ++## January 2023 ++- Microsoft Authenticator user experience improvements on pin code, verifiable credential overview and verifiable credentials requirements. ++## November 2022 ++- Entra Verified ID now reports events in the [Azure AD Audit Log](../../active-directory/reports-monitoring/concept-audit-logs.md). Only management changes made via the Admin API are currently logged. Issuance or presentations of verifiable credentials aren't reported in the audit log. The log entries have a service name of `Verified ID` and the activity will be `Create authority`, `Update contract`, etc. + ## September 2022 - The Request Service API now has [granular app permissions](verifiable-credentials-configure-tenant.md?#grant-permissions-to-get-access-tokens) and you can grant **VerifiableCredential.Create.IssueRequest** and **VerifiableCredential.Create.PresentRequest** separately to segregate duties of issuance and presentation to separate applications. This article lists the latest features, improvements, and changes in the Microso Microsoft Entra Verified ID is now generally available (GA) as the new member of the Microsoft Entra portfolio! [read more](https://techcommunity.microsoft.com/t5/microsoft-entra-azure-ad-blog/microsoft-entra-verified-id-now-generally-available/ba-p/3295506) -### Known issues -- Tenants that [opt-out](verifiable-credentials-faq.md?#how-do-i-reset-the-entra-verified-id-service) without issuing any Verifiable Credential will get a `Specified resource does not exist` error from the Admin API and/or the Entra portal. A fix for this issue should be available by August 20, 2022.+ ### Known issues ++- Tenants that [opt-out](verifiable-credentials-faq.md?#how-do-i-reset-the-entra-verified-id-service) without issuing any Verifiable Credential get a `Specified resource does not exist` error from the Admin API and/or the Entra portal. A fix for this issue should be available by August 20, 2022. ## July 2022 -- The Request Service APIs have a **new hostname** `verifiedid.did.msidentity.com`. The `beta.did.msidentity` and the `beta.eu.did.msidentity` will continue to work, but you should change your application and configuration.
Also, you no longer need to specify `.eu.` for an EU tenant.-- The Request Service APIs have **new endpoints** and **updated JSON payloads**. For issuance, see [Issuance API specification](issuance-request-api.md#issuance-request-payload) and for presentation, see [Presentation API specification](presentation-request-api.md#presentation-request-payload). The old endpoints and JSON payloads will continue to work, but you should change your applications to use the new endpoints and payloads.+- The Request Service APIs have a **new hostname** `verifiedid.did.msidentity.com`. The `beta.did.msidentity` and the `beta.eu.did.msidentity` continue to work, but you should change your application and configuration. Also, you no longer need to specify `.eu.` for an EU tenant. +- The Request Service APIs have **new endpoints** and **updated JSON payloads**. For issuance, see [Issuance API specification](issuance-request-api.md#issuance-request-payload) and for presentation, see [Presentation API specification](presentation-request-api.md#presentation-request-payload). The old endpoints and JSON payloads continue to work, but you should change your applications to use the new endpoints and payloads. - Request Service API **[Error codes](error-codes.md)** have been **updated**. - The **[Admin API](admin-api.md)** is made **public** and is documented. The Azure portal uses the Admin API, and with this REST API you can automate the onboarding of your tenant and the creation of credential contracts. - Find issuers and credentials to verify via [The Microsoft Entra Verified ID Network](how-use-vcnetwork.md). - For migrating your Azure Storage based credentials to become Managed Credentials, there's a PowerShell script in the [GitHub samples repo](https://github.com/Azure-Samples/active-directory-verifiable-credentials/tree/contractmigration/scripts/contractmigration) for the task. - We also made the following updates to our Plan and design docs:- - (updated) [architecture planning overview](introduction-to-verifiable-credentials-architecture.md). - - (updated) [Plan your issuance solution](plan-issuance-solution.md). - - (updated) [Plan your verification solution](plan-verification-solution.md). + - (updated) [architecture planning overview](introduction-to-verifiable-credentials-architecture.md). + - (updated) [Plan your issuance solution](plan-issuance-solution.md). + - (updated) [Plan your verification solution](plan-verification-solution.md). ## June 2022 -- We are adding support for the [did:web](https://w3c-ccg.github.io/did-method-web/) method. Any new tenant that starts using the Verifiable Credentials Service after June 14, 2022 will have Web as a new, default, trust system when [onboarding](verifiable-credentials-configure-tenant.md#set-up-verified-id). VC Administrators can still choose to use ION when setting a tenant. If you want to use did:web instead of ION or viceversa, you'll need to [reconfigure your tenant](verifiable-credentials-faq.md?#how-do-i-reset-the-entra-verified-id-service).-- We are rolling out several features to improve the overall experience of creating verifiable credentials in the Entra Verified ID platform:+- We're adding support for the [did:web](https://w3c-ccg.github.io/did-method-web/) method. Any new tenant that starts using the Verifiable Credentials Service after June 14, 2022 will have Web as a new default trust system when [onboarding](verifiable-credentials-configure-tenant.md#set-up-verified-id). 
VC Administrators can still choose to use ION when setting a tenant. If you want to use did:web instead of ION or viceversa, you'll need to [reconfigure your tenant](verifiable-credentials-faq.md?#how-do-i-reset-the-entra-verified-id-service). +- We're rolling out several features to improve the overall experience of creating verifiable credentials in the Entra Verified ID platform: - Introducing Managed Credentials, which are verifiable credentials that no longer use Azure Storage to store the [display & rules JSON definitions](rules-and-display-definitions-model.md). Their display and rule definitions are different from earlier versions. - Create Managed Credentials using the [new quickstart experience](how-to-use-quickstart.md).- - Administrators can create a Verified Employee Managed Credential using the [new quick start](how-to-use-quickstart-verifiedemployee.md). The Verified Employee is a verifiable credential of type verifiedEmployee that is based on a pre-defined set of claims from your tenant's Azure Active Directory. + - Administrators can create a Verified Employee Managed Credential using the [new quick start](how-to-use-quickstart-verifiedemployee.md). The Verified Employee is a verifiable credential of type verifiedEmployee that is based on a predefined set of claims from your tenant's Azure Active Directory. >[!IMPORTANT] > You need to migrate your Azure Storage based credentials to become Managed Credentials. We'll soon provide migration instructions. - We made the following updates to our docs:- - (new) [Current supported open standards for Microsoft Entra Verified ID](verifiable-credentials-standards.md). - - (new) [How to create verifiable credentials for ID token hint](how-to-use-quickstart.md). - - (new) [How to create verifiable credentials for ID token](how-to-use-quickstart-idtoken.md). - - (new) [How to create verifiable credentials for self-asserted claims](how-to-use-quickstart-selfissued.md). - - (new) [Rules and Display definition model specification](rules-and-display-definitions-model.md). - - (new) [Creating an Azure AD tenant for development](how-to-create-a-free-developer-account.md). + - (new) [Current supported open standards for Microsoft Entra Verified ID](verifiable-credentials-standards.md). + - (new) [How to create verifiable credentials for ID token hint](how-to-use-quickstart.md). + - (new) [How to create verifiable credentials for ID token](how-to-use-quickstart-idtoken.md). + - (new) [How to create verifiable credentials for self-asserted claims](how-to-use-quickstart-selfissued.md). + - (new) [Rules and Display definition model specification](rules-and-display-definitions-model.md). + - (new) [Creating an Azure AD tenant for development](how-to-create-a-free-developer-account.md). ## May 2022 -We are expanding our service to all Azure AD customers! Verifiable credentials are now available to everyone with an Azure AD subscription (Free and Premium). Existing tenants that configured the Verifiable Credentials service prior to May 4, 2022 must make a small change to avoid service disruptions. +We're expanding our service to all Azure AD customers! Verifiable credentials are now available to everyone with an Azure AD subscription (Free and Premium). Existing tenants that configured the Verifiable Credentials service prior to May 4, 2022 must make a small change to avoid service disruptions. ## April 2022 -Starting next month, we are rolling out exciting changes to the subscription requirements for the Verifiable Credentials service. 
Administrators must perform a small configuration change before **May 4, 2022** to avoid service disruptions. +Starting next month, we're rolling out exciting changes to the subscription requirements for the Verifiable Credentials service. Administrators must perform a small configuration change before **May 4, 2022** to avoid service disruptions. >[!IMPORTANT] > If changes are not applied before **May 4, 2022**, you will experience errors on issuance and presentation for your application or service using the Microsoft Entra Verified ID Service. Starting next month, we are rolling out exciting changes to the subscription req ## February 2022 -We are rolling out some breaking changes to our service. These updates require Microsoft Entra Verified ID service reconfiguration. End-users need to have their verifiable credentials reissued. +We're rolling out some breaking changes to our service. These updates require Microsoft Entra Verified ID service reconfiguration. End-users need to have their verifiable credentials reissued. - The Microsoft Entra Verified ID service can now store and handle data processing in the Azure European region. - Microsoft Entra Verified ID customers can take advantage of enhancements to credential revocation. These changes add a higher degree of privacy through the implementation of the [W3C Status List 2021](https://w3c-ccg.github.io/vc-status-list-2021/) standard. [More information](whats-new.md?#credential-revocation-with-enhanced-privacy) Since the beginning of the Microsoft Entra Verified ID service public preview, t Take the following steps to configure the Verifiable Credentials service in Europe: 1. [Check the location](verifiable-credentials-faq.md#how-can-i-check-my-azure-ad-tenants-region) of your Azure Active Directory to make sure is in Europe.-1. [Reconfigure the Verifiable Credentials service](verifiable-credentials-faq.md?#how-do-i-reset-the-entra-verified-id-service) in your tenant. +1. [Reconfigure the Verifiable Credentials service](verifiable-credentials-faq.md?#how-do-i-reset-the-entra-verified-id-service) in your tenant. >[!IMPORTANT] > On March 31st, 2022 European tenants that have not been [reconfigured](verifiable-credentials-faq.md?#how-do-i-reset-the-entra-verified-id-service) in Europe will lose access to any previous configuration and will require to configure a new instance of the Azure AD Verifiable Credential service. To confirm which endpoint you should use, we recommend checking your Azure AD te The Azure AD Verifiable Credential service supports the [W3C Status List 2021](https://w3c-ccg.github.io/vc-status-list-2021/) standard. Each Issuer tenant now has an Identity Hub endpoint used by verifiers to check on the status of a credential using a privacy-respecting mechanism. The identity hub endpoint for the tenant is also published in the DID document. This feature replaces the current status endpoint. -To uptake this feature follow the next steps: +To uptake this feature, follow the next steps: 1. [Check if your tenant has the Hub endpoint](verifiable-credentials-faq.md#how-can-i-check-if-my-tenant-has-the-new-hub-endpoint). 1. If so, go to the next step. To uptake this feature follow the next steps: Sample contract file: - ``` json + ``` json { "attestations": { "idTokens": [ Sample contract file: } ``` -3. You have to issue new verifiable credentials using your new configuration. All verifiable credentials previously issued continue to exist. 
Your previous DID remains resolvable however, they use the previous status endpoint implementation. +1. You have to issue new verifiable credentials using your new configuration. All verifiable credentials previously issued continue to exist. Your previous DID remains resolvable; however, it uses the previous status endpoint implementation. >[!IMPORTANT] > You have to reconfigure your Azure AD Verifiable Credential service instance to create your new Identity hub endpoint. You have until March 31st 2022, to schedule and manage the reconfiguration of your deployment. On March 31st, 2022 deployments that have not been reconfigured will lose access to any previous Microsoft Entra Verified ID service configuration. Administrators will need to set up a new service instance. ### Microsoft Authenticator DID Generation Update -We are making protocol updates in Microsoft Authenticator to support Single Long Form DID, thus deprecating the use of pairwise. With this update, your DID in Microsoft Authenticator will be used of every issuer and relaying party exchange. Holders of verifiable credentials using Microsoft Authenticator must get their verifiable credentials reissued as any previous credentials aren't going to continue working. +We're making protocol updates in Microsoft Authenticator to support Single Long Form DID, thus deprecating the use of pairwise DIDs. With this update, your DID in Microsoft Authenticator will be used for every issuer and relying party exchange. Holders of verifiable credentials using Microsoft Authenticator must get their verifiable credentials reissued, as any previous credentials won't continue to work. ## December 2021 |
aks | Cluster Configuration | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/cluster-configuration.md | By using `containerd` for AKS nodes, pod startup latency improves and node resou ### `Containerd` limitations/differences -* For `containerd`, we recommend using [`crictl`](https://kubernetes.io/docs/tasks/debug-application-cluster/crictl) as a replacement CLI instead of the Docker CLI for **troubleshooting** pods, containers, and container images on Kubernetes nodes (for example, `crictl ps`). +* For `containerd`, we recommend using [`crictl`](https://kubernetes.io/docs/tasks/debug-application-cluster/crictl) as a replacement CLI instead of the Docker CLI for **troubleshooting** pods, containers, and container images on Kubernetes nodes. For more information on `crictl`, see [General usage][general-usage] and [Client configuration options][client-config-options]. * `Containerd` doesn't provide the complete functionality of the docker CLI. It's available for troubleshooting only.- * `crictl` offers a more kubernetes-friendly view of containers, with concepts like pods, etc. being present. + * `crictl` offers a more Kubernetes-friendly view of containers, with concepts like pods, etc. being present. * `Containerd` sets up logging using the standardized `cri` logging format (which is different from what you currently get from docker's json driver). Your logging solution needs to support the `cri` logging format (like [Azure Monitor for Containers](../azure-monitor/containers/container-insights-enable-new-cluster.md)) * You can no longer access the docker engine, `/var/run/docker.sock`, or use Docker-in-Docker (DinD). az aks update -n aks -g myResourceGroup --disable-node-restriction <!-- LINKS - external --> [aks-release-notes]: https://github.com/Azure/AKS/releases [azurerm-mariner]: https://registry.terraform.io/providers/hashicorp/azurerm/latest/docs/resources/kubernetes_cluster_node_pool#os_sku+[general-usage]: https://kubernetes.io/docs/tasks/debug/debug-cluster/crictl/#general-usage +[client-config-options]: https://github.com/kubernetes-sigs/cri-tools/blob/master/docs/crictl.md#client-configuration-options <!-- LINKS - internal --> [azure-cli-install]: /cli/azure/install-azure-cli |
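The `crictl` guidance above lends itself to a short illustration. The following commands are standard `crictl` usage on a `containerd`-based node and aren't taken from the article itself; treat them as a sketch of the Docker-CLI-to-`crictl` transition:

```bash
# List pod sandboxes and running containers on the node (rough equivalents of "docker ps")
crictl pods
crictl ps

# List pulled images and view the logs of a specific container
crictl images
crictl logs <container-id>
```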
aks | Manage Abort Operations | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/manage-abort-operations.md | Title: Abort an Azure Kubernetes Service (AKS) long running operation (preview) description: Learn how to terminate a long running operation on an Azure Kubernetes Service cluster at the node pool or cluster level. Previously updated : 11/23/2022 Last updated : 3/23/2023 In the response, an HTTP status code of 204 is returned. The provisioning state on the managed cluster or agent pool should be **Canceled**. Use the REST API [Get Managed Clusters](/rest/api/aks/managed-clusters/get) or [Get Agent Pools](/rest/api/aks/agent-pools/get) to verify the operation. The provisioning state should update to **Canceled** within a few seconds of the abort request being accepted. The operation status of the last running operation ID on the managed cluster or agent pool, which you can retrieve by performing a GET operation against the managed cluster or agent pool, should show a status of **Canceling**. +When you terminate an operation, it doesn't roll back to the previous state; it stops at whatever step was in process when the abort request was accepted. Once complete, the cluster provisioning state shows **Canceled**. If the operation is a cluster upgrade, canceling stops it at the point it has reached. + ## Next steps Learn more about [Container insights](../azure-monitor/containers/container-insights-overview.md) to understand how it helps you monitor the performance and health of your Kubernetes cluster and container workloads. |
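To complement the REST API description above, here's a minimal sketch of issuing the abort call with `az rest`. The resource path follows the documented abort operation for managed clusters, and the `api-version` value is a placeholder assumption; verify both against the current AKS REST API reference:

```azurecli
# Sketch only: abort the latest in-progress operation on a managed cluster.
az rest --method post \
  --url "https://management.azure.com/subscriptions/<subscription-id>/resourceGroups/myResourceGroup/providers/Microsoft.ContainerService/managedClusters/myAKSCluster/abort?api-version=<api-version>"

# A 204 response indicates the request was accepted; the provisioning state
# then moves to Canceling and finally Canceled.
```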
aks | Web App Routing | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/web-app-routing.md | spec: ### Create the ingress -The Web Application Routing add-on creates an Ingress class on the cluster called `webapprouting.kubernetes.azure.com `. When you create an ingress object with this class, this activates the add-on. To obtain the certificate URI to use in the Ingress from Azure Key Vault, run the following command. +The Web Application Routing add-on creates an Ingress class on the cluster called `webapprouting.kubernetes.azure.com `. When you create an ingress object with this class, this activates the add-on. The `kubernetes.azure.com/use-osm-mtls: "true"` annotation on the Ingress object creates an Open Service Mesh (OSM) [IngressBackend](https://release-v1-2.docs.openservicemesh.io/docs/guides/traffic_management/ingress/#ingressbackend-api) to configure a backend service to accept ingress traffic from trusted sources. OSM issues a certificate that Nginx will use as the client certificate to proxy HTTPS connections to TLS backends. The client certificate and CA certificate are stored in a Kubernetes secret that Nginx will use to authenticate service mesh backends. For more information, see [Open Service Mesh: Ingress with Kubernetes Nginx Ingress Controller](https://release-v1-2.docs.openservicemesh.io/docs/demos/ingress_k8s_nginx/). To obtain the certificate URI to use in the Ingress from Azure Key Vault, run the following command. ```azurecli-interactive az keyvault certificate show --vault-name <KeyVaultName> -n <KeyVaultCertificateName> --query "id" --output tsv spec: secretName: keyvault-aks-helloworld ``` -### Create the ingress backend --Open Service Mesh (OSM) uses its [IngressBackend API](https://release-v1-2.docs.openservicemesh.io/docs/guides/traffic_management/ingress/#ingressbackend-api) to configure a backend service to accept ingress traffic from trusted sources. To proxy connections to HTTPS backends, you configure the Ingress and IngressBackend configurations to use https as the backend protocol. OSM issues a certificate that Nginx will use as the client certificate to proxy HTTPS connections to TLS backends. The client certificate and CA certificate are stored in a Kubernetes secret that Nginx will use to authenticate service mesh backends. For more information, see [Open Service Mesh: Ingress with Kubernetes Nginx Ingress Controller](https://release-v1-2.docs.openservicemesh.io/docs/demos/ingress_k8s_nginx/). --Create a file named **ingressbackend.yaml** and copy in the following YAML. --```yaml -apiVersion: policy.openservicemesh.io/v1alpha1 -kind: IngressBackend -metadata: - name: aks-helloworld - namespace: hello-web-app-routing -spec: - backends: - - name: aks-helloworld - port: - number: 80 - protocol: https - tls: - skipClientCertValidation: false - sources: - - kind: Service - name: nginx - namespace: app-routing-system - - kind: AuthenticatedPrincipal - name: ingress-nginx.ingress.cluster.local -``` - ### Create the resources on the cluster Use the [kubectl apply][kubectl-apply] command to create the resources. |
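As a small, hedged companion to the step above (the file name is an assumption; the namespace mirrors the one used elsewhere in this article), applying and checking the Ingress typically looks like:

```bash
# Apply the Ingress manifest created earlier and confirm it uses the add-on's ingress class
kubectl apply -f ingress.yaml -n hello-web-app-routing
kubectl get ingress -n hello-web-app-routing
```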
aks | Workload Identity Deploy Cluster | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/workload-identity-deploy-cluster.md | Title: Deploy and configure an Azure Kubernetes Service (AKS) cluster with workl description: In this Azure Kubernetes Service (AKS) article, you deploy an Azure Kubernetes Service cluster and configure it with an Azure AD workload identity (preview). Previously updated : 01/11/2023 Last updated : 03/14/2023 # Deploy and configure workload identity (preview) on an Azure Kubernetes Service (AKS) cluster Create an AKS cluster using the [az aks create][az-aks-create] command with the ```azurecli-interactive az group create --name myResourceGroup --location eastus -az aks create -g myResourceGroup -n myAKSCluster --node-count 1 --enable-oidc-issuer --enable-workload-identity --generate-ssh-keys +az aks create -g myResourceGroup -n myAKSCluster --enable-oidc-issuer --enable-workload-identity ``` After a few minutes, the command completes and returns JSON-formatted information about the cluster. To get the OIDC Issuer URL and save it to an environmental variable, run the fol export AKS_OIDC_ISSUER="$(az aks show -n myAKSCluster -g myResourceGroup --query "oidcIssuerProfile.issuerUrl" -otsv)" ``` -## Create a managed identity and grant permissions to access Azure Key Vault +## Create a managed identity -This step is necessary if you need to access secrets, keys, and certificates that are mounted in Azure Key Vault from a pod. Perform the following steps to configure access with a managed identity. These steps assume you have an Azure Key Vault already created and configured in your subscription. If you don't have one, see [Create an Azure Key Vault using the Azure CLI][create-key-vault-azure-cli]. --Before proceeding, you need the following information: --* Name of the Key Vault -* Resource group holding the Key Vault +Use the Azure CLI [az account set][az-account-set] command to set a specific subscription to be the current active subscription. Then use the [az identity create][az-identity-create] command to create a managed identity. -You can retrieve this information using the Azure CLI command: [az keyvault list][az-keyvault-list]. --1. Use the Azure CLI [az account set][az-account-set] command to set a specific subscription to be the current active subscription. Then use the [az identity create][az-identity-create] command to create a managed identity. -- ```azurecli - export SUBSCRIPTION_ID="$(az account show --query id --output tsv)" - export USER_ASSIGNED_IDENTITY_NAME="myIdentity" - export RG_NAME="myResourceGroup" - export LOCATION="eastus" -- az identity create --name "${USER_ASSIGNED_IDENTITY_NAME}" --resource-group "${RG_NAME}" --location "${LOCATION}" --subscription "${SUBSCRIPTION_ID}" - ``` --2. 
Set an access policy for the managed identity to access secrets in your Key Vault by running the following commands: -- ```azurecli - export RG_NAME="myResourceGroup" - export USER_ASSIGNED_IDENTITY_NAME="myIdentity" - export KEYVAULT_NAME="myKeyVault" - export USER_ASSIGNED_CLIENT_ID="$(az identity show --resource-group "${RG_NAME}" --name "${USER_ASSIGNED_IDENTITY_NAME}" --query 'clientId' -otsv)" +```azurecli +export SUBSCRIPTION_ID="$(az account show --query id --output tsv)" +export USER_ASSIGNED_IDENTITY_NAME="myIdentity" +export RG_NAME="myResourceGroup" +export LOCATION="eastus" - az keyvault set-policy --name "${KEYVAULT_NAME}" --secret-permissions get --spn "${USER_ASSIGNED_CLIENT_ID}" - ``` +az identity create --name "${USER_ASSIGNED_IDENTITY_NAME}" --resource-group "${RG_NAME}" --location "${LOCATION}" --subscription "${SUBSCRIPTION_ID}" +``` ## Create Kubernetes service account az identity federated-credential create --name myfederatedIdentity --identity-na kubectl apply -f <your application> ``` +## Optional - Grant permissions to access Azure Key Vault ++This step is necessary if you need to access secrets, keys, and certificates that are mounted in Azure Key Vault from a pod. Perform the following steps to configure access with a managed identity. These steps assume you have an Azure Key Vault already created and configured in your subscription. If you don't have one, see [Create an Azure Key Vault using the Azure CLI][create-key-vault-azure-cli]. ++Before proceeding, you need the following information: ++* Name of the Key Vault +* Resource group holding the Key Vault ++You can retrieve this information using the Azure CLI command: [az keyvault list][az-keyvault-list]. ++1. Set an access policy for the managed identity to access secrets in your Key Vault by running the following commands: ++ ```azurecli + export RG_NAME="myResourceGroup" + export USER_ASSIGNED_IDENTITY_NAME="myIdentity" + export KEYVAULT_NAME="myKeyVault" + export USER_ASSIGNED_CLIENT_ID="$(az identity show --resource-group "${RG_NAME}" --name "${USER_ASSIGNED_IDENTITY_NAME}" --query 'clientId' -otsv)" ++ az keyvault set-policy --name "${KEYVAULT_NAME}" --secret-permissions get --spn "${USER_ASSIGNED_CLIENT_ID}" + ``` + ## Disable workload identity To disable the Azure AD workload identity on the AKS cluster where it's been enabled and configured, you can run the following command: |
aks | Workload Identity Migrate From Pod Identity | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/workload-identity-migrate-from-pod-identity.md | Title: Modernize your Azure Kubernetes Service (AKS) application to use workload identity (preview) + Title: Migrate your Azure Kubernetes Service (AKS) pod to use workload identity (preview) description: In this Azure Kubernetes Service (AKS) article, you learn how to configure your Azure Kubernetes Service pod to authenticate with workload identity. Previously updated : 02/08/2023 Last updated : 03/14/2023 -# Modernize application authentication with workload identity (preview) +# Migrate from pod managed-identity to workload identity (preview) -This article focuses on pod-managed identity migration to Azure Active Directory (Azure AD) workload identity (preview) for your Azure Kubernetes Service (AKS) cluster. It also provides guidance depending on the version of the [Azure Identity][azure-identity-supported-versions] client library used by your container-based application. +This article focuses on migrating from a pod-managed identity to Azure Active Directory (Azure AD) workload identity (preview) for your Azure Kubernetes Service (AKS) cluster. It also provides guidance depending on the version of the [Azure Identity][azure-identity-supported-versions] client library used by your container-based application. [!INCLUDE [preview features callout](./includes/preview/preview-callout.md)] For either scenario, you need to have the federated trust set up before you upda If your cluster is already using the latest version of the Azure Identity SDK, perform the following steps to complete the authentication configuration: -- Deploy workload identity in parallel to where the trust is setup. You can restart your application deployment to begin using the workload identity, where it injects the OIDC annotations into the application automatically.+- Deploy workload identity in parallel with pod-managed identity. You can restart your application deployment to begin using the workload identity, where it injects the OIDC annotations into the application automatically. - After verifying the application is able to authenticate successfully, you can [remove the pod-managed identity](#remove-pod-managed-identity) annotations from your application and then remove the pod-managed identity add-on. -## Migrate from older version +### Migrate from older version If your cluster isn't using the latest version of the Azure Identity SDK, you have two options: If you don't have a managed identity created and assigned to your pod, perform t export USER_ASSIGNED_CLIENT_ID="$(az identity show --resource-group "resourceGroupName" --name "userAssignedIdentityName" --query 'clientId' -otsv)" ``` -2. Grant the managed identity the permissions required to access the resources in Azure it requires. +2. Grant the managed identity the permissions required to access the resources in Azure it requires. For information on how to do this, see [Assign a managed identity access to a resource][assign-rbac-managed-identity]. 3. To get the OIDC Issuer URL and save it to an environmental variable, run the following command. Replace the default values for the cluster name and the resource group name. 
This article showed you how to set up your pod to authenticate using a workload [azure-identity-libraries]: ../active-directory/develop/reference-v2-libraries.md [openid-connect-overview]: ../active-directory/develop/v2-protocols-oidc.md [install-azure-cli]: /cli/azure/install-azure-cli+[assign-rbac-managed-identity]: ../active-directory/managed-identities-azure-resources/howto-assign-access-cli.md <!-- EXTERNAL LINKS --> [kubectl-describe]: https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands#describe |
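The migration steps above mention granting the managed identity access to the Azure resources your workload uses. As an illustrative sketch only (the role and scope below are placeholders, not values from the article), a typical Azure RBAC assignment looks like:

```azurecli
# Example: allow the user-assigned identity to read secrets in a Key Vault via Azure RBAC.
# Substitute the role and scope your application actually requires.
az role assignment create \
  --assignee "${USER_ASSIGNED_CLIENT_ID}" \
  --role "Key Vault Secrets User" \
  --scope "/subscriptions/<subscription-id>/resourceGroups/myResourceGroup/providers/Microsoft.KeyVault/vaults/myKeyVault"
```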
aks | Workload Identity Overview | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/workload-identity-overview.md | Title: Use an Azure AD workload identities (preview) on Azure Kubernetes Service (AKS) description: Learn about Azure Active Directory workload identity (preview) for Azure Kubernetes Service (AKS) and how to migrate your application to authenticate using this identity. Previously updated : 01/06/2023 Last updated : 03/14/2023 |
app-service | App Service Web Tutorial Custom Domain | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/app-service-web-tutorial-custom-domain.md | The DNS record type you need to add with your domain provider depends on the dom [!INCLUDE [Access DNS records with domain provider](../../includes/app-service-web-access-dns-records-no-h.md)] -Select the type of record to create and follow the instructions. You can use either a [CNAME record](https://en.wikipedia.org/wiki/CNAME_record) or an [A record](https://en.wikipedia.org/wiki/List_of_DNS_record_types#A) to map a custom DNS name to App Service. +Select the type of record to create and follow the instructions. You can use either a [CNAME record](https://en.wikipedia.org/wiki/CNAME_record) or an [A record](https://en.wikipedia.org/wiki/List_of_DNS_record_types#A) to map a custom DNS name to App Service. When your function app is hosted in a [Consumption plan](../azure-functions/consumption-plan.md), only the CNAME option is supported. ### [Root domain (e.g. contoso.com)](#tab/root) |
app-service | Configure Language Php | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/configure-language-php.md | zone_pivot_groups: app-service-platform-windows-linux # Configure a PHP app for Azure App Service -This guide shows you how to configure your PHP web apps, mobile back ends, and API apps in Azure App Service. +## Show PHP version -This guide provides key concepts and instructions for PHP developers who deploy apps to App Service. If you've never used Azure App Service, follow the [PHP quickstart](quickstart-php.md) and [PHP with MySQL tutorial](tutorial-php-mysql-app.md) first. -## Show PHP version +This guide shows you how to configure your PHP web apps, mobile back ends, and API apps in Azure App Service. +This guide provides key concepts and instructions for PHP developers who deploy apps to App Service. If you've never used Azure App Service, follow the [PHP quickstart](quickstart-php.md) and [PHP with MySQL tutorial](tutorial-php-mysql-app.md) first. To show the current PHP version, run the following command in the [Cloud Shell](https://shell.azure.com): az webapp list-runtimes --os windows | grep PHP ::: zone pivot="platform-linux" +This guide shows you how to configure your PHP web apps, mobile back ends, and API apps in Azure App Service. ++This guide provides key concepts and instructions for PHP developers who deploy apps to App Service. If you've never used Azure App Service, follow the [PHP quickstart](quickstart-php.md) and [PHP with MySQL tutorial](tutorial-php-mysql-app.md) first. + To show the current PHP version, run the following command in the [Cloud Shell](https://shell.azure.com): ```azurecli-interactive az webapp list-runtimes --os linux | grep PHP ::: zone pivot="platform-windows" -Run the following command in the [Cloud Shell](https://shell.azure.com) to set the PHP version to 7.4: +Run the following command in the [Cloud Shell](https://shell.azure.com) to set the PHP version to 8.1: ```azurecli-interactive-az webapp config set --resource-group <resource-group-name> --name <app-name> --php-version 7.4 +az webapp config set --resource-group <resource-group-name> --name <app-name> --php-version 8.1 ``` ::: zone-end ::: zone pivot="platform-linux" -Run the following command in the [Cloud Shell](https://shell.azure.com) to set the PHP version to 8.0: +Run the following command in the [Cloud Shell](https://shell.azure.com) to set the PHP version to 8.1: ```azurecli-interactive-az webapp config set --resource-group <resource-group-name> --name <app-name> --linux-fx-version "PHP|8.0" +az webapp config set --resource-group <resource-group-name> --name <app-name> --linux-fx-version "PHP|8.1" ``` ::: zone-end |
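After setting the version as shown above, a quick verification step (not part of the original tutorial) can confirm the change took effect:

```azurecli
# Linux apps report the runtime through linuxFxVersion; Windows apps through phpVersion.
az webapp config show --resource-group <resource-group-name> --name <app-name> --query linuxFxVersion
az webapp config show --resource-group <resource-group-name> --name <app-name> --query phpVersion
```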
app-service | Tutorial Python Postgresql App | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/tutorial-python-postgresql-app.md | DBUSER=<db-user-name> DBPASS=<db-password> ``` +Create a SECRET_KEY value for your app by running the following command at a terminal prompt: `python -c 'import secrets; print(secrets.token_hex())'`. ++Set the returned value as the value of `SECRET_KEY` in the .env file. ++``` +SECRET_KEY=<secret-key> +``` + Create a virtual environment for the app: [!INCLUDE [Virtual environment setup](<./includes/quickstart-python/virtual-environment-setup.md>)] Sign in to the [Azure portal](https://portal.azure.com/) and follow these steps 1. *Region* → Any Azure region near you. 1. *Name* → **msdocs-python-postgres-XYZ** where *XYZ* is any three random characters. This name must be unique across Azure. 1. *Runtime stack* → **Python 3.10**.- 1. *Database* → **PostgreSQL - Flexible Server** is selected by default as the database engine. The server name and database name is also set by default to appropriate values. + 1. *Database* → **PostgreSQL - Flexible Server** is selected by default as the database engine. The server name and database name are also set by default to appropriate values. 1. *Hosting plan* → **Basic**. When you're ready, you can [scale up](manage-scale-up.md) to a production pricing tier later. 1. Select **Review + create**. 1. After validation completes, select **Create**. Sign in to the [Azure portal](https://portal.azure.com/) and follow these steps ## 2. Verify connection settings -The creation wizard generated the connectivity variables for you already as [app settings](configure-common.md#configure-app-settings). +The creation wizard generated the connectivity variables for you already as [app settings](configure-common.md#configure-app-settings). App settings are one way to keep connection secrets out of your code repository. When you're ready to move your secrets to a more secure location, here's an [article on storing in Azure Key Vault](../key-vault/certificates/quick-create-python.md). :::row::: :::column span="2"::: The creation wizard generated the connectivity variables for you already as [app :::row::: :::column span="2"::: **Step 2.** In the **Application settings** tab of the **Configuration** page, verify that `AZURE_POSTGRESQL_CONNECTIONSTRING` is present. That will be injected into the runtime environment as an environment variable.- App settings are one way to keep connection secrets out of your code repository. - When you're ready to move your secrets to a more secure location, - here's an [article on storing in Azure Key Vault](../key-vault/certificates/quick-create-python.md). :::column-end::: :::column::: :::image type="content" source="./media/tutorial-python-postgresql-app/azure-portal-get-connection-string-2.png" alt-text="A screenshot showing how to see the autogenerated connection string." lightbox="./media/tutorial-python-postgresql-app/azure-portal-get-connection-string-2.png"::: :::column-end::: :::row-end:::+ :::column span="2"::: + **Step 3.** In a terminal or command prompt, run the following Python script to generate a unique secret: `python -c 'import secrets; print(secrets.token_hex())'`. Copy the output value to use in the next step. + :::column-end::: + :::column::: + :::column-end::: + :::column span="2"::: + **Step 4.** In the **Application settings** tab of the **Configuration** page, select **New application setting**. Name the setting `SECRET_KEY`. 
Paste the value from the previous value. Select **OK**. + :::column-end::: + :::column::: + :::image type="content" source="./media/tutorial-python-postgresql-app/azure-portal-app-service-app-setting.png" alt-text="A screenshot showing how to set the SECRET_KEY app setting in the Azure portal." lightbox="./media/tutorial-python-postgresql-app/azure-portal-app-service-app-setting.png"::: + :::column-end::: + :::column span="2"::: + **Step 5.** Select **Save**. + :::column-end::: + :::column::: + :::image type="content" source="./media/tutorial-python-postgresql-app/azure-portal-app-service-app-setting-save.png" alt-text="A screenshot showing how to save the SECRET_KEY app setting in the Azure portal." lightbox="./media/tutorial-python-postgresql-app/azure-portal-app-service-app-setting-save.png"::: + :::column-end::: + Having issues? Check the [Troubleshooting guide](configure-language-python.md#troubleshooting). The `azd up` command cloned the sample app project template to your machine. The * **Source code**: The code and assets for a Flask or Django web app that can be used for local development or deployed to Azure. * **Bicep files**: Infrastructure as code (IaC) files that are used by `azd` to create the necessary resources in Azure.-* **Configuration files**: Essential configuration files such as `azure.yaml` that are used by `azd` to provision, deploy and wire resources together to produce a fully-fledged application. +* **Configuration files**: Essential configuration files such as `azure.yaml` that are used by `azd` to provision, deploy and wire resources together to produce a fully fledged application. ### 2. Provisioned the Azure resources The `azd up` command created all of the resources for the sample application in * **Azure App Service plan**: An App Service plan was created to host App Service instances. App Service plans define what compute resources are available for one or more web apps. * **Azure App Service**: An App Service instance was created in the new App Service plan to host and run the deployed application. In this case a Linux instance was created and configured to run Python apps. Additional configurations were also applied to the app service, such as setting the Postgres connection string and secret keys. * **Azure Database for PostgresSQL**: A Postgres database and server were created for the app hosted on App Service to connect to. The required admin user, network and connection settings were also configured.-* **Azure Application Insights**: Application insights was setup and configured for the app hosted on the App Service. This service enables detailed telemetry and monitoring for your application. +* **Azure Application Insights**: Application insights was set up and configured for the app hosted on the App Service. This service enables detailed telemetry and monitoring for your application. You can inspect the Bicep files in the [`infra`](https://github.com/Azure-Samples/msdocs-flask-postgresql-sample-app/tree/main/infra) folder of the project to understand how each of these resources were provisioned in more detail. The `resources.bicep` file defines most of the different services created in Azure. For example, the App Service plan and App Service web app instance were created and connected using the following Bicep code: |
app-service | Tutorial Troubleshoot Monitor | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/tutorial-troubleshoot-monitor.md | Title: 'Tutorial: Troubleshoot with Azure Monitor' -description: Learn how Azure Monitor and Log Analytics helps you monitor your App Service web app. Azure Monitor maximizes the availability by delivery a comprehensive solution for monitoring your environments. +description: Learn how Azure Monitor and Log Analytics help you monitor your App Service web app. Azure Monitor maximizes availability by delivering a comprehensive solution for monitoring your environments. Last updated 06/20/2020 # Tutorial: Troubleshoot an App Service app with Azure Monitor -This tutorial shows how to troubleshoot an [App Service](overview.md) app using [Azure Monitor](../azure-monitor/overview.md). The sample app includes code meant to exhaust memory and cause HTTP 500 errors, so you can diagnose and fix the problem using Azure Monitor. When you're finished, you'll have a sample app running on App Service on Linux integrated with [Azure Monitor](../azure-monitor/overview.md). +This tutorial shows how to troubleshoot an [App Service](overview.md) app using [Azure Monitor](../azure-monitor/overview.md). The sample app includes code meant to exhaust memory and cause HTTP 500 errors, so you can diagnose and fix the problem using Azure Monitor. When you're finished, you have a sample app running on App Service on Linux integrated with [Azure Monitor](../azure-monitor/overview.md). [Azure Monitor](../azure-monitor/overview.md) maximizes the availability and performance of your applications and services by delivering a comprehensive solution for collecting, analyzing, and acting on telemetry from your cloud and on-premises environments. You can follow the steps in this tutorial on macOS, Linux, or Windows. ## Prerequisites -To complete this tutorial, you'll need: +To complete this tutorial, you need: - [Azure subscription](https://azure.microsoft.com/free/?ref=microsoft.com&utm_source=microsoft.com&utm_medium=docs&utm_campaign=visualstudio) To complete this tutorial, you'll need: ## Create Azure resources -First, you run several commands locally to setup a sample app to use with this tutorial. The commands create Azure resources, create a deployment user, and deploy the sample app to Azure. You'll be prompted for the password supplied as a part of the creation of the deployment user. +First, you run several commands locally to set up a sample app to use with this tutorial. The commands create Azure resources, create a deployment user, and deploy the sample app to Azure. You're prompted for the password supplied as a part of the creation of the deployment user. 
```azurecli az group create --name myResourceGroup --location "South Central US" az webapp deployment user set --user-name <username> --password <password> az appservice plan create --name myAppServicePlan --resource-group myResourceGroup --sku B1 --is-linux-az webapp create --resource-group myResourceGroup --plan myAppServicePlan --name <app-name> --runtime "PHP|7.3" --deployment-local-git +az webapp create --resource-group myResourceGroup --plan myAppServicePlan --name <app-name> --runtime "PHP|8.1" --deployment-local-git az webapp config appsettings set --name <app-name> --resource-group myResourceGroup --settings DEPLOYMENT_BRANCH='main' git clone https://github.com/Azure-Samples/App-Service-Troubleshoot-Azure-Monitor cd App-Service-Troubleshoot-Azure-Monitor git push azure main ### Create a Log Analytics Workspace -Now that you've deployed the sample app to Azure App Service, you'll configure monitoring capability to troubleshoot the app when problems arise. Azure Monitor stores log data in a Log Analytics workspace. A workspace is a container that includes data and configuration information. +Now that you've deployed the sample app to Azure App Service, you configure monitoring capability to troubleshoot the app when problems arise. Azure Monitor stores log data in a Log Analytics workspace. A workspace is a container that includes data and configuration information. In this step, you create a Log Analytics workspace to configure Azure Monitor with your app. az monitor diagnostic-settings create --resource $resourceID \ Browse to `http://<app-name>.azurewebsites.net`. -The sample app, ImageConverter, converts included images from `JPG` to `PNG`. A bug has been deliberately placed in the code for this tutorial. If you select enough images, the the app produces a HTTP 500 error during image conversion. Imagine this scenario wasn't considered during the development phase. You'll use Azure Monitor to troubleshoot the error. +The sample app, ImageConverter, converts included images from `JPG` to `PNG`. A bug has been deliberately placed in the code for this tutorial. If you select enough images, the app produces an HTTP 500 error during image conversion. Imagine this scenario wasn't considered during the development phase. You'll use Azure Monitor to troubleshoot the error. ### Verify the app works To convert images, click `Tools` and select `Convert to PNG`.  -Select the first two images and click `convert`. This will convert successfully. +Select the first two images and click `convert`. This converts successfully.  ### Break the app -Now that you've verified the app by converting two images successfully, we'll try to convert the first five images. +Now that you've verified the app by converting two images successfully, we try to convert the first five images.  AppServiceConsoleLogs | where ResultDescription contains "error" ``` -In the `ResultDescription` column, you'll see the following error: +In the `ResultDescription` column, you see the following error: ```output PHP Fatal error: Allowed memory size of 134217728 bytes exhausted In the local directory, open the `process.php` and look at line 20. imagepng($imgArray[$x], $filename); ``` -The first argument, `$imgArray[$x]`, is a variable holding all JPGs (in-memory) needing conversion. However, `imagepng` only needs the image being converted and not all images. Pre-loading images is not necessary and may be causing the memory exhaustion, leading to HTTP 500s. 
Let's update the code to load images on-demand to see if it resolves the issue. Next, you will improve the code to address the memory problem. +The first argument, `$imgArray[$x]`, is a variable holding all JPGs (in-memory) needing conversion. However, `imagepng` only needs the image being converted and not all images. Pre-loading images is not necessary and may be causing the memory exhaustion, leading to HTTP 500s. Let's update the code to load images on-demand to see if it resolves the issue. Next, you improve the code to address the memory problem. ## Fix the app |
automation | Extension Based Hybrid Runbook Worker Install | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/automation/extension-based-hybrid-runbook-worker-install.md | description: This article provides information about deploying the extension-bas Previously updated : 02/20/2023 Last updated : 03/21/2023 #Customer intent: As a developer, I want to learn about extensions so that I can efficiently deploy Hybrid Runbook Workers. Azure Automation stores and manages runbooks and then delivers them to one or mo ### Supported operating systems -| Windows | Linux (x64)| +| Windows | Linux | |||-| ● Windows Server 2022 (including Server Core) <br> ● Windows Server 2019 (including Server Core) <br> ● Windows Server 2016, version 1709 and 1803 (excluding Server Core) <br> ● Windows Server 2012, 2012 R2 <br> ● Windows 10 Enterprise (including multi-session) and Pro | ● Debian GNU/Linux 10 and 11 <br> ● Ubuntu 22.04 LTS <br> ● SUSE Linux Enterprise Server 15.2, and 15.3 <br> ● Red Hat Enterprise Linux Server 7 and 8 | +| ● Windows Server 2022 (including Server Core) <br> ● Windows Server 2019 (including Server Core) <br> ● Windows Server 2016, version 1709, and 1803 (excluding Server Core) <br> ● Windows Server 2012, 2012 R2 <br> ● Windows 10 Enterprise (including multi-session) and Pro | ● Debian GNU/Linux 8, 9, 10, and 11 <br> ● Ubuntu 18.04 LTS, 20.04 LTS, and 22.04 LTS <br> ● SUSE Linux Enterprise Server 15.2, and 15.3 <br> ● Red Hat Enterprise Linux Server 7 and 8 <br> *The Hybrid Worker extension follows the support timelines of the OS vendor.| ### Other Requirements -| Windows | Linux (x64)| +| Windows | Linux | ||| | Windows PowerShell 5.1 (download WMF 5.1). PowerShell Core isn't supported.| Linux Hardening must not be enabled. | | |
azure-arc | Conceptual Gitops Flux2 | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/kubernetes/conceptual-gitops-flux2.md | Because Azure Resource Manager manages your configurations, you can automate cre ## Parameters -For a description of all parameters that Flux supports, see the [official Flux documentation](https://fluxcd.io/docs/). Flux in Azure doesn't support all parameters yet. Let us know if a parameter you need is missing from the Azure implementation. +To see all the parameters supported by Flux in Azure, see the [`az k8s-configuration` documentation](/cli/azure/k8s-configuration). This implementation doesn't currently support every parameter that Flux supports (see the [official Flux documentation](https://fluxcd.io/docs/)). Let us know if a parameter you need is missing from the Azure implementation. -You can see the full list of parameters that the `k8s-configuration flux` Azure CLI command supports by using the `-h` parameter: -az k8 -```azurecli -az k8s-configuration flux -h --Group - az k8s-configuration flux : Commands to manage Flux v2 Kubernetes configurations. --Subgroups: - deployed-object : Commands to see deployed objects associated with Flux v2 Kubernetes - configurations. - kustomization : Commands to manage Kustomizations associated with Flux v2 Kubernetes - configurations. --Commands: - create : Create a Flux v2 Kubernetes configuration. - delete : Delete a Flux v2 Kubernetes configuration. - list : List all Flux v2 Kubernetes configurations. - show : Show a Flux v2 Kubernetes configuration. - update : Update a Flux v2 Kubernetes configuration. -``` +You can also see the full list of parameters for the `az k8s-configuration flux` by using the `-h` parameter in Azure CLI (for example, `az k8s-configuration flux -h` or `az k8s-configuration flux create -h`). -Here are the parameters for the `k8s-configuration flux create` CLI command: --```azurecli -az k8s-configuration flux create -h --This command is from the following extension: k8s-configuration --Command - az k8s-configuration flux create : Create a Flux v2 Kubernetes configuration. --Arguments - --cluster-name -c [Required] : Name of the Kubernetes cluster. - --cluster-type -t [Required] : Specify Arc connected clusters or AKS managed clusters. - Allowed values: connectedClusters, managedClusters. - --name -n [Required] : Name of the flux configuration. - --resource-group -g [Required] : Name of resource group. You can configure the default group - using `az configure --defaults group=<name>`. - --url -u [Required] : URL of the source to reconcile. - --bucket-insecure : Communicate with a bucket without TLS. Allowed values: false, - true. - --bucket-name : Name of the S3 bucket to sync. - --container-name : Name of the Azure Blob Storage container to sync - --interval --sync-interval : Time between reconciliations of the source on the cluster. - --kind : Source kind to reconcile. Allowed values: bucket, git, azblob. - Default: git. - --kustomization -k : Define kustomizations to sync sources with parameters ['name', - 'path', 'depends_on', 'timeout', 'sync_interval', - 'retry_interval', 'prune', 'force']. - --namespace --ns : Namespace to deploy the configuration. Default: default. - --no-wait : Do not wait for the long-running operation to finish. - --scope -s : Specify scope of the operator to be 'namespace' or 'cluster'. - Allowed values: cluster, namespace. Default: cluster. 
- --suspend : Suspend the reconciliation of the source and kustomizations - associated with this configuration. Allowed values: false, - true. - --timeout : Maximum time to reconcile the source before timing out. --Auth Arguments - --local-auth-ref --local-ref : Local reference to a kubernetes secret in the configuration - namespace to use for communication to the source. --Bucket Auth Arguments - --bucket-access-key : Access Key ID used to authenticate with the bucket. - --bucket-secret-key : Secret Key used to authenticate with the bucket. --Git Auth Arguments - --https-ca-cert : Base64-encoded HTTPS CA certificate for TLS communication with - private repository sync. - --https-ca-cert-file : File path to HTTPS CA certificate file for TLS communication - with private repository sync. - --https-key : HTTPS token/password for private repository sync. - --https-user : HTTPS username for private repository sync. - --known-hosts : Base64-encoded known_hosts data containing public SSH keys - required to access private Git instances. - --known-hosts-file : File path to known_hosts contents containing public SSH keys - required to access private Git instances. - --ssh-private-key : Base64-encoded private ssh key for private repository sync. - --ssh-private-key-file : File path to private ssh key for private repository sync. --Git Repo Ref Arguments - --branch : Branch within the git source to reconcile with the cluster. - --commit : Commit within the git source to reconcile with the cluster. - --semver : Semver range within the git source to reconcile with the - cluster. - --tag : Tag within the git source to reconcile with the cluster. --Global Arguments - --debug : Increase logging verbosity to show all debug logs. - --help -h : Show this help message and exit. - --only-show-errors : Only show errors, suppressing warnings. - --output -o : Output format. Allowed values: json, jsonc, none, table, tsv, - yaml, yamlc. Default: json. - --query : JMESPath query string. See http://jmespath.org/ for more - information and examples. - --subscription : Name or ID of subscription. You can configure the default - subscription using `az account set -s NAME_OR_ID`. - --verbose : Increase logging verbosity. Use --debug for full debug logs. 
- -Azure Blob Storage Account Auth Arguments - --sp_client_id : The client ID for authenticating a service principal with Azure Blob, required for this authentication method - --sp_tenant_id : The tenant ID for authenticating a service principal with Azure Blob, required for this authentication method - --sp_client_secret : The client secret for authenticating a service principal with Azure Blob - --sp_client_cert : The Base64 encoded client certificate for authenticating a service principal with Azure Blob - --sp_client_cert_password : The password for the client certificate used to authenticate a service principal with Azure Blob - --sp_client_cert_send_chain : Specifies whether to include x5c header in client claims when acquiring a token to enable subject name / issuer based authentication for the client certificate - --account_key : The Azure Blob Shared Key for authentication - --sas_token : The Azure Blob SAS Token for authentication - --mi_client_id : The client ID of the managed identity for authentication with Azure Blob --Examples - Create a Flux v2 Kubernetes configuration - az k8s-configuration flux create --resource-group my-resource-group \ - --cluster-name mycluster --cluster-type connectedClusters \ - --name myconfig --scope cluster --namespace my-namespace \ - --kind git --url https://github.com/Azure/arc-k8s-demo \ - --branch main --kustomization name=my-kustomization -- Create a Kubernetes v2 Flux Configuration with Bucket Source Kind - az k8s-configuration flux create --resource-group my-resource-group \ - --cluster-name mycluster --cluster-type connectedClusters \ - --name myconfig --scope cluster --namespace my-namespace \ - --kind bucket --url https://bucket-provider.minio.io \ - --bucket-name my-bucket --kustomization name=my-kustomization \ - --bucket-access-key my-access-key --bucket-secret-key my-secret-key - - Create a Kubernetes v2 Flux Configuration with Azure Blob Storage Source Kind - az k8s-configuration flux create --resource-group my-resource-group \ - --cluster-name mycluster --cluster-type connectedClusters \ - --name myconfig --scope cluster --namespace my-namespace \ - --kind azblob --url https://mystorageaccount.blob.core.windows.net \ - --container-name my-container --kustomization name=my-kustomization \ - --account-key my-account-key -``` +The following information describes some of the parameters and arguments available for the `az k8s-configuration flux create` command. ### Configuration general arguments | Parameter | Format | Notes | | - | - | - | | `--cluster-name` `-c` | String | Name of the cluster resource in Azure. |-| `--cluster-type` `-t` | `connectedClusters`, `managedClusters` | Use `connectedClusters` for Azure Arc-enabled Kubernetes clusters and `managedClusters` for AKS clusters. | -| `--resource-group` `-g` | String | Name of the Azure resource group that holds the Azure Arc or AKS cluster resource. | +| `--cluster-type` `-t` | Allowed values: `connectedClusters`, `managedClusters`, `provisionedClusters` | Use `connectedClusters` for Azure Arc-enabled Kubernetes clusters, `managedClusters` for AKS clusters, or `provisionedClusters` for [AKS hybrid clusters provisioned from Azure](extensions.md#aks-hybrid-clusters-provisioned-from-azure-preview) (installing extensions on these clusters is currently in preview). | +| `--resource-group` `-g` | String | Name of the Azure resource group that holds the cluster resource. | | `--name` `-n`| String | Name of the Flux configuration in Azure. 
| | `--namespace` `--ns` | String | Name of the namespace to deploy the configuration. Default: `default`. | | `--scope` `-s` | String | Permission scope for the operators. Possible values are `cluster` (full access) or `namespace` (restricted access). Default: `cluster`. For on-premises repositories, Flux uses `libgit2`. ### Kustomization -By using `az k8s-configuration flux kustomization create`, you can create one or more kustomizations during the configuration. +By using [`az k8s-configuration flux kustomization create`](/cli/azure/k8s-configuration/flux/kustomization#az-k8s-configuration-flux-kustomization-create), you can create one or more kustomizations during the configuration. | Parameter | Format | Notes | | - | - | - | By using `az k8s-configuration flux kustomization create`, you can create one or | `validation` | String | Values: `none`, `client`, `server`. Default: `none`. See [Flux documentation](https://fluxcd.io/docs/) for details.| | `force` | Boolean | Default: `false`. Set `force=true` to instruct the kustomize controller to re-create resources when patching fails because of an immutable field change. | -You can also use `az k8s-configuration flux kustomization` to create, update, list, show, and delete kustomizations in a Flux configuration: --```console -az k8s-configuration flux kustomization -h --Group - az k8s-configuration flux kustomization : Commands to manage Kustomizations associated with Flux - v2 Kubernetes configurations. --Commands: - create : Create a Kustomization associated with a Flux v2 Kubernetes configuration. - delete : Delete a Kustomization associated with a Flux v2 Kubernetes configuration. - list : List Kustomizations associated with a Flux v2 Kubernetes configuration. - show : Show a Kustomization associated with a Flux v2 Kubernetes configuration. - update : Update a Kustomization associated with a Flux v2 Kubernetes configuration. -``` --Here are the kustomization creation options: --```azurecli -az k8s-configuration flux kustomization create -h --This command is from the following extension: k8s-configuration --Command - az k8s-configuration flux kustomization create : Create a Kustomization associated with a - Kubernetes Flux v2 Configuration. --Arguments - --cluster-name -c [Required] : Name of the Kubernetes cluster. - --cluster-type -t [Required] : Specify Arc connected clusters or AKS managed clusters. - Allowed values: connectedClusters, managedClusters. - --kustomization-name -k [Required] : Specify the name of the kustomization to target. - --name -n [Required] : Name of the flux configuration. - --resource-group -g [Required] : Name of resource group. You can configure the default - group using `az configure --defaults group=<name>`. - --dependencies --depends --depends-on : Comma-separated list of kustomization dependencies. - --force : Re-create resources that cannot be updated on the - cluster (i.e. jobs). Allowed values: false, true. - --interval --sync-interval : Time between reconciliations of the kustomization on the - cluster. - --no-wait : Do not wait for the long-running operation to finish. - --path : Specify the path in the source that the kustomization - should apply. - --prune : Garbage collect resources deployed by the kustomization - on the cluster. Allowed values: false, true. - --retry-interval : Time between reconciliations of the kustomization on the - cluster on failures, defaults to --sync-interval. - --timeout : Maximum time to reconcile the kustomization before - timing out. 
--Global Arguments - --debug : Increase logging verbosity to show all debug logs. - --help -h : Show this help message and exit. - --only-show-errors : Only show errors, suppressing warnings. - --output -o : Output format. Allowed values: json, jsonc, none, - table, tsv, yaml, yamlc. Default: json. - --query : JMESPath query string. See http://jmespath.org/ for more - information and examples. - --subscription : Name or ID of subscription. You can configure the - default subscription using `az account set -s - NAME_OR_ID`. - --verbose : Increase logging verbosity. Use --debug for full debug - logs. --Examples - Create a Kustomization associated with a Kubernetes v2 Flux Configuration - az k8s-configuration flux kustomization create --resource-group my-resource-group \ - --cluster-name mycluster --cluster-type connectedClusters --name myconfig \ - --kustomization-name my-kustomization-2 --path ./my/path --prune --force -``` +You can also use [`az k8s-configuration flux kustomization`](/cli/azure/k8s-configuration/flux/kustomization) to update, list, show, and delete kustomizations in a Flux configuration. ## Multi-tenancy |
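For illustration only, here is a minimal sketch of managing existing kustomizations with the `az k8s-configuration flux kustomization` subcommands described above; the resource group, cluster, configuration, and kustomization names reuse the placeholder values from the earlier examples and are not values required by the CLI itself:

```azurecli
# List the kustomizations defined on an existing Flux configuration
az k8s-configuration flux kustomization list --resource-group my-resource-group \
    --cluster-name mycluster --cluster-type connectedClusters --name myconfig

# Delete a single kustomization without removing the Flux configuration itself
# (the command prompts for confirmation)
az k8s-configuration flux kustomization delete --resource-group my-resource-group \
    --cluster-name mycluster --cluster-type connectedClusters --name myconfig \
    --kustomization-name my-kustomization-2
```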
azure-arc | Network Requirements | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/kubernetes/network-requirements.md | Depending on your scenario, you may need connectivity to other URLs, such as tho - [Azure portal URLs](../../azure-portal/azure-portal-safelist-urls.md) - [Azure CLI endpoints for proxy bypass](/cli/azure/azure-cli-endpoints) -For a complete list of network requirements for Azure Arc features and Azure Arc-enabled services, see [Azure Arc network requirements (Consolidated)](../network-requirements-consolidated.md). +For a complete list of network requirements for Azure Arc features and Azure Arc-enabled services, see [Azure Arc network requirements](../network-requirements-consolidated.md). ## Next steps -- Learn about other [requirements for Arc-enabled Kubernetes](system-requirements.md).+- Understand [system requirements for Arc-enabled Kubernetes](system-requirements.md). - Use our [quickstart](quickstart-connect-cluster.md) to connect your cluster. - Review [frequently asked questions](faq.md) about Arc-enabled Kubernetes. |
azure-arc | Tutorial Akv Secrets Provider | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/kubernetes/tutorial-akv-secrets-provider.md | Capabilities of the Azure Key Vault Secrets Provider extension include: You can install the Azure Key Vault Secrets Provider extension on your connected cluster in the Azure portal, by using Azure CLI, or by deploying an ARM template. +> [!TIP] +> If the cluster is behind an outbound proxy server, ensure that you connect it to Azure Arc using the [proxy configuration](quickstart-connect-cluster.md#connect-using-an-outbound-proxy-server) option before installing the extension. + > [!TIP] > Only one instance of the extension can be deployed on each Azure Arc-enabled Kubernetes cluster. az k8s-extension update --cluster-name $CLUSTER_NAME --resource-group $RESOURCE_ You can use other configuration settings as needed for your deployment. For example, to change the kubelet root directory while creating a cluster, modify the az k8s-extension create command: ```azurecli-interactive-az k8s-extension create --cluster-name $CLUSTER_NAME --resource-group $RESOURCE_GROUP --cluster-type connectedClusters --extension-type Microsoft.AzureKeyVaultSecretsProvider --name akvsecretsprovider --configuration-settings linux.kubeletRootDir=/path/to/kubelet secrets-store-csi-driver.enable secrets-store-csi-driver.linux.kubeletRootDir=/path/to/kubelet +az k8s-extension create --cluster-name $CLUSTER_NAME --resource-group $RESOURCE_GROUP --cluster-type connectedClusters --extension-type Microsoft.AzureKeyVaultSecretsProvider --name akvsecretsprovider --configuration-settings linux.kubeletRootDir=/path/to/kubelet secrets-store-csi-driver.linux.kubeletRootDir=/path/to/kubelet ``` |
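As a quick sanity check after installing the extension, the following hedged sketch shows one way to confirm the deployment succeeded; it assumes the same `$CLUSTER_NAME` and `$RESOURCE_GROUP` variables used in the commands above:

```azurecli
# Confirm the extension reached a successful provisioning state and check its version
az k8s-extension show --cluster-name $CLUSTER_NAME --resource-group $RESOURCE_GROUP \
    --cluster-type connectedClusters --name akvsecretsprovider \
    --query "{state:provisioningState, version:version}"
```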
azure-arc | Overview | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/overview.md | -Azure Arc simplifies governance and management by delivering a consistent multi-cloud and on-premises management platform. +Azure Arc simplifies governance and management by delivering a consistent multicloud and on-premises management platform. Azure Arc provides a centralized, unified way to: For information, see the [Azure pricing page](https://azure.microsoft.com/pricin * Learn about [Azure Arc-enabled System Center Virtual Machine Manager](system-center-virtual-machine-manager/overview.md). * Experience Azure Arc by exploring the [Azure Arc Jumpstart](https://aka.ms/AzureArcJumpstart). * Learn about best practices and design patterns through the various [Azure Arc Landing Zone Accelerators](https://aka.ms/ArcLZAcceleratorReady).+* Understand [network requirements for Azure Arc](network-requirements-consolidated.md). |
azure-arc | Deploy Cli | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/resource-bridge/deploy-cli.md | -## az arcappliance createconfig +## `az arcappliance createconfig` This command creates the configuration files used by Arc resource bridge. Credentials that are provided during `createconfig`, such as vCenter credentials for VMware vSphere, are stored in a configuration file and locally within Arc resource bridge. These credentials should be a separate user account used only by Arc resource bridge, with permission to view, create, delete, and manage on-premises resources. If the credentials change, then the credentials on the resource bridge should be updated. This command also calls the `validate` command to check the configuration files. > [!NOTE] > Azure Stack HCI and Hybrid AKS use different commands to create the Arc resource bridge configuration files. -## az arcappliance validate +## `az arcappliance validate` -The `validate` command checks the configuration files for a valid schema, cloud and core validations (such as management machine connectivity to required URLs), network settings, and proxy settings. It also performs tests on identity privileges and role assignments, network configuration, load balancer configuration and content delivery network connectivity. +The `validate` command checks the configuration files for a valid schema, cloud and core validations (such as management machine connectivity to [required URLs](network-requirements.md)), network settings, and proxy settings. It also performs tests on identity privileges and role assignments, network configuration, load balancer configuration and content delivery network connectivity. -## az arcappliance prepare +## `az arcappliance prepare` This command downloads the OS images from Microsoft that are used to deploy the on-premises appliance VM. Once downloaded, the images are then uploaded to the local cloud image gallery to prepare for the creation of the appliance VM. This command takes about 10-30+ minutes to complete, depending on the network speed. Allow the command to complete before continuing with the deployment. -## az arcappliance deploy +## `az arcappliance deploy` The `deploy` command deploys an on-premises instance of Arc resource bridge as an appliance VM, bootstrapped to be a Kubernetes management cluster. This command gets all necessary pods and agents within the Kubernetes cluster into a running state. Once the appliance VM is up, the kubeconfig file is generated. -## az arcappliance create +## `az arcappliance create` This command creates Arc resource bridge in Azure as an ARM resource, then establishes the connection between the ARM resource and on-premises appliance VM. Once the `create` command initiates the connection, it will return in the terminal, even though the connection between the ARM resource and on-premises appliance VM is not yet complete. The resource bridge needs about 5 minutes to establish the connection between the ARM resource and the on-premises VM. -## az arcappliance show +## `az arcappliance show` The `show` command gets the status of the Arc resource bridge and ARM resource information. It can be used to check the progress of the connection between the ARM resource and on-premises appliance VM. While the Arc resource bridge is connecting the ARM resource to the on-premises Successful Arc resource bridge creation results in `ProvisioningState = Succeeded` and `Status = Running`. 
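As an illustrative sketch of checking deployment progress with the `show` command, the following assumes placeholder resource group and resource bridge names rather than values from the article:

```azurecli
# Check connection progress; creation is complete when
# provisioningState = Succeeded and status = Running
az arcappliance show --resource-group my-resource-group --name my-resource-bridge
```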
-## az arcappliance delete +## `az arcappliance delete` This command deletes the appliance VM and Azure resources. It doesn't clean up the OS image, which remains in the on-premises cloud gallery. |
azure-arc | Overview | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/resource-bridge/overview.md | Arc resource bridge supports the following Azure regions: * Australia East * Southeast Asia - ### Regional resiliency While Azure has a number of redundancy features at every level of failure, if a service impacting event occurs, this preview release of Azure Arc resource bridge does not support cross-region failover or other resiliency capabilities. In the event of the service becoming unavailable, the on-premises VMs continue to operate unaffected. Management from Azure is unavailable during that service outage. The following private cloud environments and their versions are officially suppo ### Networking -Arc resource bridge communicates outbound securely to Azure Arc over TCP port 443. If the appliance needs to connect through a firewall or proxy server to communicate over the internet, it communicates outbound using the HTTPS protocol. --You may need to allow specific URLs to [ensure outbound connectivity is not blocked](troubleshoot-resource-bridge.md#restricted-outbound-connectivity) by your firewall or proxy server. --For more information, see [Azure Arc resource bridge (preview) network requirements](network-requirements.md). +Arc resource bridge communicates outbound securely to Azure Arc over TCP port 443. If the appliance needs to connect through a firewall or proxy server to communicate over the internet, it communicates outbound using the HTTPS protocol. You may need to allow specific URLs to [ensure outbound connectivity is not blocked](troubleshoot-resource-bridge.md#restricted-outbound-connectivity) by your firewall or proxy server. For more information, see [Azure Arc resource bridge (preview) network requirements](network-requirements.md). ## Next steps |
azure-arc | Security Overview | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/resource-bridge/security-overview.md | Title: Azure Arc resource bridge (preview) security overview description: Security information about Azure resource bridge (preview). Previously updated : 08/25/2022 Last updated : 03/23/2023 # Azure Arc resource bridge (preview) security overview The [activity log](../../azure-monitor/essentials/activity-log.md) is an Azure p ## Next steps -- Review the [Azure Arc resource bridge (preview) overview](overview.md) to understand more about requirements and technical details.+- Understand [system requirements](system-requirements.md) and [network requirements](network-requirements.md) for Azure Arc resource bridge (preview). +- Review the [Azure Arc resource bridge (preview) overview](overview.md) to understand more about features and benefits. - Learn more about [Azure Arc](../overview.md). |
azure-arc | System Requirements | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/resource-bridge/system-requirements.md | Title: Azure Arc resource bridge (preview) system requirements description: Learn about system requirements for Azure Arc resource bridge (preview). Previously updated : 02/15/2023 Last updated : 03/23/2023 # Azure Arc resource bridge (preview) system requirements When deploying Arc resource bridge with AKS on Azure Stack HCI (AKS Hybrid), the ## Next steps -- Review the [Azure Arc resource bridge (preview) overview](overview.md) to understand more about requirements and technical details.+- Understand [network requirements for Azure Arc resource bridge (preview)](network-requirements.md). +- Review the [Azure Arc resource bridge (preview) overview](overview.md) to understand more about features and benefits. - Learn about [security configuration and considerations for Azure Arc resource bridge (preview)](security-overview.md).- |
azure-arc | Onboard Service Principal | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/servers/onboard-service-principal.md | The Azure Arc service in the Azure portal provides a streamlined way to create a You can use [Azure PowerShell](/powershell/azure/install-az-ps) to create a service principal with the [New-AzADServicePrincipal](/powershell/module/Az.Resources/New-AzADServicePrincipal) cmdlet. -1. Run the following command. You must store the output of the [`New-AzADServicePrincipal`](/powershell/module/az.resources/new-azadserviceprincipal) cmdlet in a variable, or you will not be able to retrieve the password needed in a later step. -+1. Check the context of your Azure PowerShell session to ensure you're working in the correct subscription. Use [Set-AzContext](/powershell/module/az.accounts/set-azcontext) if you need to change the subscription. + ```azurepowershell-interactive- $sp = New-AzADServicePrincipal -DisplayName "Arc-for-servers" -Role "Azure Connected Machine Onboarding" - $sp - ``` -- ```output - Secret : System.Security.SecureString - ServicePrincipalNames : {ad9bcd79-be9c-45ab-abd8-80ca1654a7d1, https://Arc-for-servers} - ApplicationId : ad9bcd79-be9c-45ab-abd8-80ca1654a7d1 - ObjectType : ServicePrincipal - DisplayName : Hybrid-RP - Id : 5be92c87-01c4-42f5-bade-c1c10af87758 - Type : + Get-AzContext ```--2. To retrieve the password stored in the `$sp` variable, run the following command: -+ +1. Run the following command to create a service principal and assign it the Azure Connected Machine Onboarding role for the selected subscription. After the service principal is created, it will print the application ID and secret. The secret is valid for 1 year, after which you'll need to generate a new secret and update any scripts with the new secret. + ```azurepowershell-interactive- $credential = New-Object pscredential -ArgumentList "temp", $sp.Secret - $credential.GetNetworkCredential().password + $sp = New-AzADServicePrincipal -DisplayName "Arc server onboarding account" -Role "Azure Connected Machine Onboarding" + $sp | Format-Table AppId, @{ Name = "Secret"; Expression = { $_.PasswordCredentials.SecretText }} + ``` + ```output + AppId Secret + -- + aaaaaaaa-bbbb-cccc-dddd-eeeeeeeeeeee PASSWORD_SHOWN_HERE ``` -3. In the output, find the values for the fields **password** and **ApplicationId**. You'll need these values later, so save them in a secure place. If you forget or lose your service principal password, you can reset it using the [`New-AzADSpCredential`](/powershell/module/az.resources/new-azadspcredential) cmdlet. --The values from the following properties are used with parameters passed to the `azcmagent`: --- The value from the **ApplicationId** property is used for the `--service-principal-id` parameter value-- The value from the **password** property is used for the `--service-principal-secret` parameter used to connect the agent.--> [!TIP] -> Make sure to use the service principal **ApplicationId** property, not the **Id** property. --4. Assign the **Azure Connected Machine Onboarding** role to the service principal for the designated resource group or subscription. This role contains only the permissions required to onboard a machine. Note that your account must be a member of the **Owner** or **User Access Administrator** role for the subscription to which the service principal will have access. 
For information on how to add role assignments, see [Assign Azure roles using the Azure portal](../../role-based-access-control/role-assignments-portal.md) or [Assign Azure roles using Azure CLI](../../role-based-access-control/role-assignments-cli.md). + The values from the following properties are used with parameters passed to the `azcmagent`: + + - The value from the **AppId** property is used for the `--service-principal-id` parameter value + - The value from the **Secret** property is used for the `--service-principal-secret` parameter used to connect the agent. ## Generate the installation script from the Azure portal After you install the agent and configure it to connect to Azure Arc-enabled ser ## Next steps - Review the [Planning and deployment guide](plan-at-scale-deployment.md) to plan for deploying Azure Arc-enabled servers at any scale and implement centralized management and monitoring. - Learn how to [troubleshoot agent connection issues](troubleshoot-agent-onboard.md). - Learn how to manage your machines using [Azure Policy](../../governance/policy/overview.md) for such things as VM [guest configuration](../../governance/machine-configuration/overview.md), verifying that machines are reporting to the expected Log Analytics workspace, monitoring with [VM insights](../../azure-monitor/vm/vminsights-enable-policy.md), and more. |
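To illustrate how the **AppId** and **Secret** values are consumed by the agent, here is a hedged sketch of an `azcmagent connect` call; the tenant, subscription, resource group, and region values are placeholders you would replace with your own:

```console
azcmagent connect \
  --service-principal-id "<AppId from the service principal output>" \
  --service-principal-secret "<Secret from the service principal output>" \
  --tenant-id "<tenant ID>" \
  --subscription-id "<subscription ID>" \
  --resource-group "<resource group for the Arc-enabled server>" \
  --location "<Azure region>"
```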
azure-cache-for-redis | Cache How To Active Geo Replication | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-cache-for-redis/cache-how-to-active-geo-replication.md | Active geo-replication groups up to five instances of Enterprise Azure Cache for > Data transfer between Azure regions is charged at standard [bandwidth rates](https://azure.microsoft.com/pricing/details/bandwidth/). > -> [!IMPORTANT] -> The FLUSHALL and FLUSHDB commands are blocked when using active geo-replication to prevent accidental data loss across replicated cache instances. -> +## Scope of availability ++|Tier | Basic, Standard | Premium |Enterprise, Enterprise Flash | +| |::|:-:|::| +|Available | No | No | Yes | +++|Tier | Available| +|:|::| +|Basic, Standard | No | +|Premium | No | +|Enterprise, Enterprise Flash| Yes | +++The Premium tier of Azure Cache for Redis offers a version of geo-replication called [_passive geo-replication_](cache-how-to-geo-replication.md). Passive geo-replication provides an active-passive configuration. ++## Active geo-replication prerequisites ++There are a few restrictions when using active geo replication: +- Only the [RediSearch](cache-redis-modules.md#redisearch) and [RedisJSON](cache-redis-modules.md#redisjson) modules are supported +- On the _Enterprise Flash_ tier, only the _No Eviction_ eviction policy can be used. All eviction policies are supported on the _Enterprise_ tier. +- Data persistence isn't supported because active geo-replication provides a superior experience. +- You can't add an existing (that is, running) cache to a geo-replication group. You can only add a cache to a geo-replication group when you create the cache. +- All caches within a geo-replication group must have the same configuration. For example, all caches must have the same SKU, capacity, eviction policy, clustering policy, modules, and TLS setting. +- You can't use the `FLUSHALL` and `FLUSHDB` Redis commands when using active geo-replication. Prohibiting the commands prevents unintended deletion of data. Use the [flush control plane operation](#flush-operation) instead. ## Create or join an active geo-replication group Active geo-replication groups up to five instances of Enterprise Azure Cache for ## Remove from an active geo-replication group -To remove a cache instance from an active geo-replication group, you just delete the instance. The remaining instances will reconfigure themselves automatically. +To remove a cache instance from an active geo-replication group, you just delete the instance. The remaining instances then reconfigure themselves automatically. ## Force-unlink if there's a region outage You should remove the unavailable cache because the remaining caches in the repl ### Azure CLI -Use the Azure CLI for creating a new cache and geo-replication group, or to add a new cache to an existing geo-replication group. For more information, see [az redisenterprise create](/cli/azure/redisenterprise#az-redisenterprise-create). +Use the Azure CLI to create a new cache and geo-replication group, or to add a new cache to an existing geo-replication group. For more information, see [az redisenterprise create](/cli/azure/redisenterprise#az-redisenterprise-create). 
#### Create new Enterprise instance in a new geo-replication group using Azure CLI To configure active geo-replication properly, the ID of the cache instance being #### Create new Enterprise instance in an existing geo-replication group using Azure CLI -This example creates a new Cache for Redis Enterprise E10 instance called _Cache2_ in the West US region. Then, the cache is added to the `replicationGroup` active geo-replication group created above. This way, it's linked in an active-active configuration with Cache1. +This example creates a new Enterprise E10 cache instance called _Cache2_ in the West US region. Then, the script adds the cache to the `replicationGroup` active geo-replication group created in a previous procedure. This way, it's linked in an active-active configuration with _Cache1_. ```azurecli-interactive az redisenterprise create --location "West US" --cluster-name "Cache2" --sku "Enterprise_E10" --resource-group "myResourceGroup" --group-nickname "replicationGroup" --linked-databases id="/subscriptions/34b6ecbd-ab5c-4768-b0b8-bf587aba80f6/resourceGroups/myResourceGroup/providers/Microsoft.Cache/redisEnterprise/Cache1/databases/default" --linked-databases id="/subscriptions/34b6ecbd-ab5c-4768-b0b8-bf587aba80f6/resourceGroups/myResourceGroup/providers/Microsoft.Cache/redisEnterprise/Cache2/databases/default" Use Azure PowerShell to create a new cache and geo-replication group, or to add #### Create new Enterprise instance in a new geo-replication group using PowerShell -This example creates a new Azure Cache for Redis Enterprise E10 cache instance called "Cache1" in the East US region. Then, the cache is added to a new active geo-replication group called _replicationGroup_: +This example creates a new Azure Cache for Redis Enterprise E10 cache instance called _Cache1_ in the East US region. Then, the cache is added to a new active geo-replication group called _replicationGroup_: ```powershell-interactive New-AzRedisEnterpriseCache -Name "Cache1" -ResourceGroupName "myResourceGroup" -Location "East US" -Sku "Enterprise_E10" -GroupNickname "replicationGroup" -LinkedDatabase '{id:"/subscriptions/34b6ecbd-ab5c-4768-b0b8-bf587aba80f6/resourceGroups/myResourceGroup/providers/Microsoft.Cache/redisEnterprise/Cache1/databases/default"}' To configure active geo-replication properly, the ID of the cache instance being #### Create new Enterprise instance in an existing geo-replication group using PowerShell -This example creates a new Azure Cache for Redis E10 instance called _Cache2_ in the West US region. Then, the cache is added to the "replicationGroup" active geo-replication group created above. This way, it's linked in an active-active configuration with _Cache1_. +This example creates a new Enterprise E10 cache instance called _Cache2_ in the West US region. Then, the script adds the cache to the "replicationGroup" active geo-replication group created in the previous procedure. This links the two caches, _Cache1_ and _Cache2_, in an active-active configuration. 
```powershell-interactive New-AzRedisEnterpriseCache -Name "Cache2" -ResourceGroupName "myResourceGroup" -Location "West US" -Sku "Enterprise_E10" -GroupNickname "replicationGroup" -LinkedDatabase '{id:"/subscriptions/34b6ecbd-ab5c-4768-b0b8-bf587aba80f6/resourceGroups/myResourceGroup/providers/Microsoft.Cache/redisEnterprise/Cache1/databases/default"}', '{id:"/subscriptions/34b6ecbd-ab5c-4768-b0b8-bf587aba80f6/resourceGroups/myResourceGroup/providers/Microsoft.Cache/redisEnterprise/Cache2/databases/default"}' New-AzRedisEnterpriseCache -Name "Cache2" -ResourceGroupName "myResourceGroup" - As before, you need to list both _Cache1_ and _Cache2_ using the `-LinkedDatabase` parameter. +## Flush operation ++Due to the potential for inadvertent data loss, you can't use the `FLUSHALL` and `FLUSHDB` Redis commands with any cache instance residing in a geo-replication group. Instead, use the **Flush Cache(s)** button located at the top of the **Active geo-replication** working pane. +++> [!IMPORTANT] +> Be careful when using the **Flush Caches** feature. Selecting the button removes all data from the current cache and from ALL linked caches in the geo-replication group. +> ++Manage access to the feature using [Azure role-based access control](../role-based-access-control/overview.md). Only authorized users should be given access to flush all caches. + ## Next steps Learn more about Azure Cache for Redis features. |
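As an illustrative, hedged sketch of restricting who can perform management operations such as flushing linked caches, the following assigns a role scoped to a single cache; the assignee, role, and resource ID are placeholders, and the role you actually grant depends on your access model:

```azurecli
# Grant a user access to manage one Redis Enterprise cache only
az role assignment create --assignee "user@contoso.com" --role "Contributor" \
    --scope "/subscriptions/<subscription-id>/resourceGroups/myResourceGroup/providers/Microsoft.Cache/redisEnterprise/Cache1"
```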
azure-cache-for-redis | Cache Monitor Diagnostic Settings | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-cache-for-redis/cache-monitor-diagnostic-settings.md | -Diagnostic settings in Azure are used to collect resource logs. Azure resource Logs are emitted by a resource and provide rich, frequent data about the operation of that resource. These logs are captured per request and they're also referred to as "data plane logs". The content of these logs varies by resource type. +Diagnostic settings in Azure are used to collect resource logs. An Azure resource emits resource logs and provides rich, frequent data about the operation of that resource. These logs are captured per request and are also referred to as "data plane logs". See [diagnostic settings in Azure Monitor](../azure-monitor/essentials/diagnostic-settings.md) for a recommended overview of the functionality in Azure. The content of these logs varies by resource type. In Azure Cache for Redis, two options are available to log: -Azure Cache for Redis uses Azure diagnostic settings to log information on all client connections to your cache. Logging and analyzing this diagnostic setting helps you understand who is connecting to your caches and the timestamp of those connections. The log data could be used to identify the scope of a security breach and for security auditing purposes. +- **Cache Metrics** (that is "AllMetrics") used to [log metrics from Azure Monitor](/azure/azure-monitor/essentials/diagnostic-settings?tabs=portal) +- **Connection Logs** logs connections to the cache for security and diagnostic purposes. -Once configured, your cache starts to log incoming client connections by IP address. It also logs the number of connections originating from each unique IP address. The logs aren't cumulative. They represent point-in-time snapshots taken at 10-second intervals. +## Scope of availability ++|Tier | Basic, Standard, and Premium | Enterprise and Enterprise Flash | +|||| +|Cache Metrics | Yes | Yes | +|Connection Logs | Yes | Yes (preview) | ++## Cache Metrics ++Azure Cache for Redis emits [many metrics](cache-how-to-monitor.md#list-of-metrics) such as _Server Load_ and _Connections per Second_ that are useful to log. Selecting the **AllMetrics** option allows these and other cache metrics to be logged. You can configure how long the metrics are retained. See [here for an example of exporting cache metrics to a storage account](cache-how-to-monitor.md#use-a-storage-account-to-export-cache-metrics). ++## Connection Logs ++Azure Cache for Redis uses Azure diagnostic settings to log information on client connections to your cache. Logging and analyzing this diagnostic setting helps you understand who is connecting to your caches and the timestamp of those connections. The log data could be used to identify the scope of a security breach and for security auditing purposes. ++## Differences Between Azure Cache for Redis Tiers ++Implementation of connection logs is slightly different between tiers: +- **Basic, Standard, and Premium-tier caches** polls client connections by IP address, including the number of connections originating from each unique IP address. These logs aren't cumulative. They represent point-in-time snapshots taken at 10-second intervals. Authentication events (successful and failed) and disconnection events aren't logged in these tiers. 
+- **Enterprise and Enterprise Flash-tier caches** use the [audit connection events](https://docs.redis.com/latest/rs/security/audit-events/) functionality built into Redis Enterprise. Audit connection events allow every connection, disconnection, and authentication event to be logged, including failed authentication events. ++The connection logs produced look similar among the tiers, but have some differences. The two formats are shown in more detail later in the article. ++> [!IMPORTANT] +> The connection logging in the Basic, Standard, and Premium tiers _polls_ the current client connections in the cache. The same client IP addresses appear over and over again. Logging in the Enterprise and Enterprise Flash tiers is focused on each connection _event_. Logs only occur when the actual event occurred for the first time. +> ++## Prerequisites/Limitations of Connection Logging ++### Basic, Standard, and Premium tiers +- Because connection logs in these tiers consist of point-in-time snapshots taken every 10 seconds, connections that are established and removed in-between 10-second intervals aren't logged. +- Authentication events aren't logged. +- All diagnostic settings may take up to [90 minutes](../azure-monitor/essentials/diagnostic-settings.md#time-before-telemetry-gets-to-destination) to start flowing to your selected destination. +- Enabling connection logs can cause a small performance degradation to the cache instance. +- Only the _Analytics Logs_ pricing plan is supported when streaming logs to Azure Log Analytics. For more information, see [Azure Monitor pricing](https://azure.microsoft.com/pricing/details/monitor/). ++### Enterprise and Enterprise Flash tiers +- When you use **OSS Cluster Policy**, logs are emitted from each data node. When you use **Enterprise Cluster Policy**, only the node being used as a proxy emits logs. Both versions still cover all connections to the cache. This is just an architectural difference. +- Data loss (that is, missing a connection event) is rare, but possible. Data loss is typically caused by networking issues. +- Disconnection logs aren't yet fully stable and events may be missed. +- Because connection logs on the Enterprise tiers are event-based, be careful of your retention policies. For instance, if retention is set to 10 days, and a connection event occurred 15 days ago, that connection might still exist, but the log for that connection isn't retained. +- If using [active geo-replication](cache-how-to-active-geo-replication.md), logging must be configured for each cache instance in the geo-replication group individually. +- All diagnostic settings may take up to [90 minutes](../azure-monitor/essentials/diagnostic-settings.md#time-before-telemetry-gets-to-destination) to start flowing to your selected destination. +- Enabling connection logs may cause a small performance degradation to the cache instance. ++> [!NOTE] +> It is always possible to use the [INFO](https://redis.io/commands/info/) or [CLIENT LIST](https://redis.io/commands/client-list/) commands to check who is connected to a cache instance on-demand. +> ++> [!IMPORTANT] +> When selecting logs, you can choose either the specific _Category_ or _Category groups_, which are predefined groupings of logs across Azure services. When you use _Category groups_, [you can no longer configure the retention settings](../azure-monitor/essentials/diagnostic-settings.md#resource-logs). 
If you need to determine retention duration for your connection logs, select the item in the _Categories_ section instead. +> ++## Log Destinations You can turn on diagnostic settings for Azure Cache for Redis instances and send resource logs to the following destinations: - **Log Analytics workspace** - doesn't need to be in the same region as the resource being monitored.-- **Storage account** - must be in the same region as the cache.+- **Storage account** - must be in the same region as the cache. [Premium storage accounts are not supported](../azure-monitor/essentials/diagnostic-settings.md#destination-limitations) as a destination, however. - **Event hub** - diagnostic settings can't access event hub resources when virtual networks are enabled. Enable the **Allow trusted Microsoft services to bypass this firewall?** setting in event hubs to grant access to your event hub resources. The event hub must be in the same region as the cache.+- **Partner Solution** - a list of potential partner logging solutions can be found [here](../partner-solutions/partners.md) For more information on diagnostic requirements, see [diagnostic settings](../azure-monitor/essentials/diagnostic-settings.md?tabs=CMD). -You'll be charged normal data rates for storage account and event hub usage when you send diagnostic logs to either destination. You're billed under Azure Monitor not Azure Cache for Redis. When sending logs to **Log Analytics**, you're only charged for Log Analytics data ingestion. +You're charged normal data rates for storage account and event hub usage when you send diagnostic logs to either destination. You're billed under Azure Monitor not Azure Cache for Redis. When sending logs to **Log Analytics**, you're only charged for Log Analytics data ingestion. For more pricing information, [Azure Monitor pricing](https://azure.microsoft.com/pricing/details/monitor/). -## Create diagnostics settings via the Azure portal +## Enable connection logging using the Azure portal ++### [Portal with Basic, Standard, and Premium tiers](#tab/basic-standard-premium) 1. Sign into the [Azure portal](https://portal.azure.com). For more pricing information, [Azure Monitor pricing](https://azure.microsoft.co :::image type="content" source="media/cache-monitor-diagnostic-settings/cache-monitor-diagnostic-setting.png" alt-text="Select diagnostics"::: -1. In the **Diagnostic settings** pane, select **ConnectedClientList** from **Category details**. +1. In the **Diagnostic settings** pane, select **ConnectedClientList** from **Categories**. - |Category | Definition | Key Properties | - |||| - |ConnectedClientList | IP addresses and counts of clients connected to the cache, logged at a regular interval. | `connectedClients` and nested within: `ip`, `count`, `privateLinkIpv6` | + For more detail on the data logged, see below [Contents of the Connection Logs](#contents-of-the-connection-logs). - For more detail on other fields, see below [Resource Logs](#resource-logs). --1. Once you select your **Categories details**, send your logs to your preferred destination. Select the information on the right. +1. Once you select **ConnectedClientList**, send your logs to your preferred destination. Select the information in the working pane. 
:::image type="content" source="media/cache-monitor-diagnostic-settings/diagnostics-resource-specific.png" alt-text="Select enable resource-specific"::: -## Create diagnostic setting via REST API +### [Portal with Enterprise and Enterprise Flash tiers (preview)](#tab/enterprise-enterprise-flash) ++1. Sign into the [Azure portal](https://portal.azure.com). ++1. Navigate to your Azure Cache for Redis account. Open the **Diagnostic Settings - Auditing** pane under the **Monitoring** section on the left. Then, select **Add diagnostic setting**. + :::image type="content" source="media/cache-monitor-diagnostic-settings/cache-enterprise-auditing.png" alt-text="Screenshot of Diagnostic settings - Auditing selected in the Resource menu."::: -Use the Azure Monitor REST API for creating a diagnostic setting via the interactive console. For more information, see [Create or update](/rest/api/monitor/diagnostic-settings/create-or-update). +1. In the **Diagnostic Setting - Auditing** pane, select **Connection events** from **Categories**. ++ For more detail on the data logged, see below [Contents of the Connection Logs](#contents-of-the-connection-logs). ++1. Once you select **Connection events**, send your logs to your preferred destination. Select the information in the working pane. + :::image type="content" source="media/cache-monitor-diagnostic-settings/cache-enterprise-connection-events.png" alt-text="Screenshot showing Connection events being checked in working pane."::: ++ + -### Request +## Enable connection logging using the REST API ++### [REST API with Basic, Standard, and Premium tiers](#tab/basic-standard-premium) ++Use the Azure Monitor REST API for creating a diagnostic setting via the interactive console. For more information, see [Create or update](/rest/api/monitor/diagnostic-settings/create-or-update?tabs=HTTP). ++#### Request ```http PUT https://management.azure.com/{resourceUri}/providers/Microsoft.Insights/diagnosticSettings/{name}?api-version=2017-05-01-preview ``` -### Headers +#### Headers | Parameters/Headers | Value/Description | ||| PUT https://management.azure.com/{resourceUri}/providers/Microsoft.Insights/diag | `api-version` | 2017-05-01-preview | | `Content-Type` | application/json | -### Body +#### Body ```json { PUT https://management.azure.com/{resourceUri}/providers/Microsoft.Insights/diag } ``` -## Create diagnostic setting via Azure CLI +### [REST API with Enterprise and Enterprise Flash tiers (preview)](#tab/enterprise-enterprise-flash) ++Use the Azure Monitor REST API for creating a diagnostic setting via the interactive console. For more information, see [Create or update](/rest/api/monitor/diagnostic-settings/create-or-update?tabs=HTTP). ++#### Request ++```http +PUT https://management.azure.com/{resourceUri}/providers/Microsoft.Insights/diagnosticSettings/{name}?api-version=2017-05-01-preview +``` ++#### Headers ++ | Parameters/Headers | Value/Description | + ||| + | `name` | The name of your diagnostic setting. 
| + | `resourceUri` | subscriptions/{SUBSCRIPTION_ID}/resourceGroups/{RESOURCE_GROUP}/providers/Microsoft.Cache/RedisEnterprise/{CACHE_NAME}/databases/default | + | `api-version` | 2017-05-01-preview | + | `Content-Type` | application/json | ++#### Body ++```json +{ + "properties": { + "storageAccountId": "/subscriptions/df602c9c-7aa0-407d-a6fb-eb20c8bd1192/resourceGroups/apptest/providers/Microsoft.Storage/storageAccounts/myteststorage", + "eventHubAuthorizationRuleID": "/subscriptions/1a66ce04-b633-4a0b-b2bc-a912ec8986a6/resourceGroups/montest/providers/microsoft.eventhub/namespaces/mynamespace/authorizationrules/myrule", + "eventHubName": "myeventhub", + "marketplacePartnerId": "/subscriptions/abcdeabc-1234-1234-ab12-123a1234567a/resourceGroups/test-rg/providers/Microsoft.Datadog/monitors/mydatadog", + "workspaceId": "/subscriptions/4b9e8510-67ab-4e9a-95a9-e2f1e570ea9c/resourceGroups/insights integration/providers/Microsoft.OperationalInsights/workspaces/myworkspace", + "logs": [ + { + "category": "ConnectionEvents", + "enabled": true, + "retentionPolicy": { + "enabled": false, + "days": 0 + } + } + ] + } +} ++``` ++++## Enable Connection Logging using Azure CLI -Use the `az monitor diagnostic-settings create` command to create a diagnostic setting with the Azure CLI. For more for information on command and parameter descriptions, see [Create diagnostic settings to send platform logs and metrics to different destinations](/cli/azure/monitor/diagnostic-settings#az-monitor-diagnostic-settings-create). +### [Azure CLI with Basic, Standard, and Premium tiers](#tab/basic-standard-premium) ++Use the `az monitor diagnostic-settings create` command to create a diagnostic setting with the Azure CLI. For more for information on command and parameter descriptions, see [Create diagnostic settings to send platform logs and metrics to different destinations](/cli/azure/monitor/diagnostic-settings#az-monitor-diagnostic-settings-create). 
This example shows how to use the Azure CLI to stream data to four different endpoints: ```azurecli az monitor diagnostic-settings create - --resource /subscriptions/1a66ce04-b633-4a0b-b2bc-a912ec8986a6/resourceGroups/montest/providers/Microsoft.Cache/Redis/myname - --name constoso-setting + --resource /subscriptions/{subscriptionID}/resourceGroups/{resourceGroupname}/providers/Microsoft.Cache/Redis/{cacheName} + --name {logName} --logs '[{"category": "ConnectedClientList","enabled": true,"retentionPolicy": {"enabled": false,"days": 0}}]' - --event-hub MyEventHubName - --event-hub-rule /subscriptions/1a66ce04-b633-4a0b-b2bc-a912ec8986a6/resourceGroups/montest/providers/microsoft.eventhub/namespaces/mynamespace/authorizationrules/RootManageSharedAccessKey - --storage-account /subscriptions/1a66ce04-b633-4a0b-b2bc-a912ec8986a6/resourceGroups/montest/providers/Microsoft.Storage/storageAccounts/myuserspace - --workspace /subscriptions/4b9e8510-67ab-4e9a-95a9-e2f1e570ea9c/resourceGroups/insights-integration/providers/Microsoft.OperationalInsights/workspaces/myworkspace + --event-hub {eventHubName} + --event-hub-rule /subscriptions/{subscriptionID}/resourceGroups/{resourceGroupName}/providers/microsoft.eventhub/namespaces/{eventHubNamespace}/authorizationrules/{ruleName} + --storage-account /subscriptions/{subscriptionID}/resourceGroups/{resourceGroupName}/providers/Microsoft.Storage/storageAccounts/{storageAccountName} + --workspace /subscriptions/{subscriptionID}/resourceGroups/{resourceGroupName}/providers/Microsoft.OperationalInsights/workspaces/{logAnalyticsWorkspaceName} + --marketplace-partner-id /subscriptions/{subscriptionID}/resourceGroups/{resourceGroupname}/providers/Microsoft.Datadog/monitors/mydatadog +``` ++### [Azure CLI with Enterprise and Enterprise Flash tiers (preview)](#tab/enterprise-enterprise-flash) ++Use the `az monitor diagnostic-settings create` command to create a diagnostic setting with the Azure CLI. For more information on command and parameter descriptions, see [Create diagnostic settings to send platform logs and metrics to different destinations](/cli/azure/monitor/diagnostic-settings#az-monitor-diagnostic-settings-create). This example shows how to use the Azure CLI to stream data to four different endpoints: ++```azurecli +az monitor diagnostic-settings create + --resource /subscriptions/{subscriptionID}/resourceGroups/{resourceGroupName}/providers/Microsoft.Cache/redisenterprise/{cacheName}/databases/default + --name {logName} + --logs '[{"category": "ConnectionEvents","enabled": true,"retentionPolicy": {"enabled": false,"days": 0}}]' + --event-hub {eventHubName} + --event-hub-rule /subscriptions/{subscriptionID}/resourceGroups/{resourceGroupName}/providers/microsoft.eventhub/namespaces/{eventHubNamespace}/authorizationrules/{ruleName} + --storage-account /subscriptions/{subscriptionID}/resourceGroups/{resourceGroupName}/providers/Microsoft.Storage/storageAccounts/{storageAccountName} + --workspace /subscriptions/{subscriptionID}/resourceGroups/{resourceGroupName}/providers/Microsoft.OperationalInsights/workspaces/{logAnalyticsWorkspaceName} + --marketplace-partner-id /subscriptions/{subscriptionID}/resourceGroups/{resourceGroupname}/providers/Microsoft.Datadog/monitors/mydatadog ``` -## Resource Logs +++## Contents of the Connection Logs +### [Connection Log Contents for Basic, Standard, and Premium tiers](#tab/basic-standard-premium) These fields and properties appear in the `ConnectedClientList` log category. 
In **Azure Monitor**, logs are collected in the `ACRConnectedClientList` table under the resource provider name of `MICROSOFT.CACHE`. | Azure Storage field or property | Azure Monitor Logs property | Description | These fields and properties appear in the `ConnectedClientList` log category. In | `connectedClients.privateLinkIpv6` | `PrivateLinkIpv6` | The Redis client private link IPv6 address (if applicable). | | `connectedClients.count` | `ClientCount` | The number of Redis client connections from the associated IP address. | -### Sample storage account log +#### Sample storage account log If you send your logs to a storage account, the contents of the logs look like this. If you send your logs to a storage account, the contents of the logs look like t } ``` +### [Connection Log Contents for Enterprise and Enterprise Flash tiers (preview)](#tab/enterprise-enterprise-flash) ++These fields and properties appear in the `ConnectionEvents` log category. In **Azure Monitor**, logs are collected in the `REDConnectionEvents` table under the resource provider name of `MICROSOFT.CACHE`. ++| Azure Storage field or property | Azure Monitor Logs property | Description | +| | | | +| `time` | `TimeGenerated` | The timestamp (UTC) when event log was captured. | +| `location` | `Location` | The location (region) the Azure Cache for Redis instance was accessed in. | +| `category` | n/a | Available log categories: `ConnectionEvents`. | +| `resourceId` | `_ResourceId` | The Azure Cache for Redis resource for which logs are enabled.| +| `operationName` | `OperationName` | The Redis operation associated with the log record. | +| `properties` | n/a | The contents of this field are described in the rows that follow. | +| `eventEpochTime` | `EventEpochTime` | The UNIX timestamp (number of seconds since January 1, 1970) when the event happened in UTC. The timestamp can be converted to datetime format using function unixtime_seconds_todatetime in log analytics workspace. | +| `clientIP` | `ClientIP` | The Redis client IP address. If using Azure storage, the IP address is IPv4 or private link IPv6 format based on cache type. If using Log Analytics, the result is always in IPv4, as a separate IPv6 field is provided. | +| n/a | `PrivateLinkIPv6` | The Redis client private link IPv6 address (only emitted if using both Private Link and log analytics). | +| `id` | `ConnectionId` | Unique connection ID assigned by Redis. | +| `eventType` | `EventType` | Type of connection event (new_conn, auth, or close_conn). | +| `eventStatus` | `EventStatus` | Results of an authentication request as a status code (only applicable for authentication event). | ++> [!NOTE] +> If private link is used, only a IPv6 address will be logged (unless you are streaming the data to log analytics). You can convert the IPv6 address to the equivalent IPv4 address by looking at the last four bytes of data in the IPv6 address. For instance, in the private link IPv6 address "fd40:8913:31:6810:6c31:200:a01:104", the last four bytes in hexadecimal are "0a", "01", "01", and "04". (Note that leading zeros are omitted after each colon.) These correspond to "10", "1", "1", and "4" in decimal, giving us the IPv4 address "10.1.1.4". 
+> ++#### Sample storage account log ++If you send your logs to a storage account, a log for a connection event looks like this: ++```json + { + "time": "2023-01-24T10:00:02.3680050Z", + "resourceId": "/SUBSCRIPTIONS/4A1C78C6-5CB1-422C-A34E-0DF7FCB9BD0B/RESOURCEGROUPS/TEST/PROVIDERS/MICROSOFT.CACHE/REDISENTERPRISE/AUDITING-SHOEBOX/DATABASES/DEFAULT", + "category": "ConnectionEvents", + "location": "westus", + "operationName": "Microsoft.Cache/redisEnterprise/databases/ConnectionEvents/Read", + "properties": { + "eventEpochTime": 1674554402, + "id": 6185063009002, + "clientIP": "20.228.16.39", + "eventType": "new_conn" + } + } +``` ++And the log for an auth event looks like this: ++```json + { + "time": "2023-01-24T10:00:02.3680050Z", + "resourceId": "/SUBSCRIPTIONS/4A1C78C6-5CB1-422C-A34E-0DF7FCB9BD0B/RESOURCEGROUPS/TEST/PROVIDERS/MICROSOFT.CACHE/REDISENTERPRISE/AUDITING-SHOEBOX/DATABASES/DEFAULT", + "category": "ConnectionEvents", + "location": "westus", + "operationName": "Microsoft.Cache/redisEnterprise/databases/ConnectionEvents/Read", + "properties": { + "eventEpochTime": 1674554402, + "id": 6185063009002, + "clientIP": "20.228.16.39", + "eventType": "auth", + "eventStatus": 8 + } + } +``` ++And the log for a disconnection event looks like this: +```json + { + "time": "2023-01-24T10:00:03.3680050Z", + "resourceId": "/SUBSCRIPTIONS/4A1C78C6-5CB1-422C-A34E-0DF7FCB9BD0B/RESOURCEGROUPS/TEST/PROVIDERS/MICROSOFT.CACHE/REDISENTERPRISE/AUDITING-SHOEBOX/DATABASES/DEFAULT", + "category": "ConnectionEvents", + "location": "westus", + "operationName": "Microsoft.Cache/redisEnterprise/databases/ConnectionEvents/Read", + "properties": { + "eventEpochTime": 1674554402, + "id": 6185063009002, + "clientIP": "20.228.16.39", + "eventType": "close_conn" + } + } +``` +++ ## Log Analytics Queries +> [!NOTE] +> For a tutorial on how to use Azure Log Analytics, see [Overview of Log Analytics in Azure Monitor](../azure-monitor/logs/log-analytics-overview.md). Remember that it may take up to 90 minutes before logs show up in Log Analtyics. +> + Here are some basic queries to use as models. +### [Queries for Basic, Standard, and Premium tiers](#tab/basic-standard-premium) + - Azure Cache for Redis client connections per hour within the specified IP address range: ```kusto ACRConnectedClientList | summarize count() by ClientIp ``` +### [Queries for Enterprise and Enterprise Flash tiers (preview)](#tab/enterprise-enterprise-flash) ++- Azure Cache for Redis connections per hour within the specified IP address range: ++```kusto +REDConnectionEvents +// For particular datetime filtering, add '| where EventTime between (StartTime .. EndTime)' +// For particular IP range filtering, add '| where ipv4_is_in_range(ClientIp, IpRange)' +// IP range can be defined like this 'let IpRange = "10.1.1.0/24";' at the top of query. +| extend EventTime = unixtime_seconds_todatetime(EventEpochTime) +| where EventType == "new_conn" +| summarize ConnectionCount = count() by TimeRange = bin(EventTime, 1h) +``` ++- Azure Cache for Redis authentication requests per hour within the specified IP address range: ++```kusto +REDConnectionEvents +| extend EventTime = unixtime_seconds_todatetime(EventEpochTime) +// For particular datetime filtering, add '| where EventTime between (StartTime .. EndTime)' +// For particular IP range filtering, add '| where ipv4_is_in_range(ClientIp, IpRange)' +// IP range can be defined like this 'let IpRange = "10.1.1.0/24";' at the top of query. 
+| where EventType == "auth" +| summarize AuthencationRequestsCount = count() by TimeRange = bin(EventTime, 1h) +``` ++- Unique Redis client IP addresses that have connected to the cache: ++```kusto +REDConnectionEvents +// https://docs.redis.com/latest/rs/security/audit-events/#status-result-codes +// EventStatus : +// 0 AUTHENTICATION_FAILED - Invalid username and/or password. +// 1 AUTHENTICATION_FAILED_TOO_LONG - Username or password are too long. +// 2 AUTHENTICATION_NOT_REQUIRED - Client tried to authenticate, but authentication isnΓÇÖt necessary. +// 3 AUTHENTICATION_DIRECTORY_PENDING - Attempting to receive authentication info from the directory in async mode. +// 4 AUTHENTICATION_DIRECTORY_ERROR - Authentication attempt failed because there was a directory connection error. +// 5 AUTHENTICATION_SYNCER_IN_PROGRESS - Syncer SASL handshake. Return SASL response and wait for the next request. +// 6 AUTHENTICATION_SYNCER_FAILED - Syncer SASL handshake. Returned SASL response and closed the connection. +// 7 AUTHENTICATION_SYNCER_OK - Syncer authenticated. Returned SASL response. +// 8 AUTHENTICATION_OK - Client successfully authenticated. +| where EventType == "auth" and EventStatus == 2 or EventStatus == 8 or EventStatus == 7 +| summarize count() by ClientIp +``` ++- Unsuccessful authentication attempts to the cache ++```kusto +REDConnectionEvents +// https://docs.redis.com/latest/rs/security/audit-events/#status-result-codes +// EventStatus : +// 0 AUTHENTICATION_FAILED - Invalid username and/or password. +// 1 AUTHENTICATION_FAILED_TOO_LONG - Username or password are too long. +// 2 AUTHENTICATION_NOT_REQUIRED - Client tried to authenticate, but authentication isnΓÇÖt necessary. +// 3 AUTHENTICATION_DIRECTORY_PENDING - Attempting to receive authentication info from the directory in async mode. +// 4 AUTHENTICATION_DIRECTORY_ERROR - Authentication attempt failed because there was a directory connection error. +// 5 AUTHENTICATION_SYNCER_IN_PROGRESS - Syncer SASL handshake. Return SASL response and wait for the next request. +// 6 AUTHENTICATION_SYNCER_FAILED - Syncer SASL handshake. Returned SASL response and closed the connection. +// 7 AUTHENTICATION_SYNCER_OK - Syncer authenticated. Returned SASL response. +// 8 AUTHENTICATION_OK - Client successfully authenticated. +| where EventType == "auth" and EventStatus != 2 and EventStatus != 8 and EventStatus != 7 +| project ClientIp, EventStatus, ConnectionId +``` ++ ## Next steps For detailed information about how to create a diagnostic setting by using the Azure portal, CLI, or PowerShell, see [create diagnostic setting to collect platform logs and metrics in Azure](../azure-monitor/essentials/diagnostic-settings.md) article. |
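If you prefer to run these queries outside the portal, a minimal sketch using the Azure CLI is shown below; the workspace GUID is a placeholder, and the query is just one of the examples above:

```azurecli
# Run a Log Analytics query against the workspace that receives the connection logs
az monitor log-analytics query --workspace "<Log-Analytics-workspace-GUID>" \
    --analytics-query "REDConnectionEvents | where EventType == 'new_conn' | summarize count() by ClientIp"
```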
azure-functions | Create First Function Cli Node | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/create-first-function-cli-node.md | Title: Create a JavaScript function from the command line - Azure Functions description: Learn how to create a JavaScript function from the command line, then publish the local Node.js project to serverless hosting in Azure Functions. Previously updated : 11/18/2021 Last updated : 03/08/2023 ms.devlang: javascript +zone_pivot_groups: functions-nodejs-model # Quickstart: Create a JavaScript function in Azure from the command line - In this article, you use command-line tools to create a JavaScript function that responds to HTTP requests. After testing the code locally, you deploy it to the serverless environment of Azure Functions. -Completing this quickstart incurs a small cost of a few USD cents or less in your Azure account. +>[!NOTE] +>The v4 programming model for authoring Functions in Node.js is currently in Preview. Compared to the current v3 model, the v4 model is designed to have a more idiomatic and intuitive experience for JavaScript and TypeScript developers. To learn more, see the [Developer Reference Guide](functions-reference-node.md). ++Use the selector at the top to choose the programming model of your choice for completing this quickstart. Note that completion will incur a small cost of a few USD cents or less in your Azure account. There is also a [Visual Studio Code-based version](create-first-function-vs-code-node.md) of this article. Before you begin, you must have the following: + An Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free/?ref=microsoft.com&utm_source=microsoft.com&utm_medium=docs&utm_campaign=visualstudio). + The [Azure Functions Core Tools](./functions-run-local.md#v2) version 4.x.+++ The [Azure Functions Core Tools](./functions-run-local.md#v2) version v4.0.5085 or above + One of the following tools for creating Azure resources: Before you begin, you must have the following: + The Azure [Az PowerShell module](/powershell/azure/install-az-ps) version 5.9.0 or later. + [Node.js](https://nodejs.org/) version 18 or 16. +++ [Node.js](https://nodejs.org/) version 18 or above. ### Prerequisite check Verify your prerequisites, which depend on whether you are using Azure CLI or Az # [Azure CLI](#tab/azure-cli) + In a terminal or command window, run `func --version` to check that the Azure Functions Core Tools are version 4.x.+++ In a terminal or command window, run `func --version` to check that the Azure Functions Core Tools are version v4.0.4915 or above. + Run `az --version` to check that the Azure CLI version is 2.4 or later. Verify your prerequisites, which depend on whether you are using Azure CLI or Az # [Azure PowerShell](#tab/azure-powershell) + In a terminal or command window, run `func --version` to check that the Azure Functions Core Tools are version 4.x.+++ In a terminal or command window, run `func --version` to check that the Azure Functions Core Tools are version v4.0.4915 or above. + Run `(Get-Module -ListAvailable Az).Version` and verify version 5.0 or later. Verify your prerequisites, which depend on whether you are using Azure CLI or Az In Azure Functions, a function project is a container for one or more individual functions that each responds to a specific trigger. All functions in a project share the same local and hosting configurations. In this section, you create a function project that contains a single function. 1. 
Run the `func init` command, as follows, to create a functions project in a folder named *LocalFunctionProj* with the specified runtime: ```console In Azure Functions, a function project is a container for one or more individual `func new` creates a subfolder matching the function name that contains a code file appropriate to the project's chosen language and a configuration file named *function.json*. +You may find the [Azure Functions Core Tools reference](functions-core-tools-reference.md) helpful. + ### (Optional) Examine the file contents If desired, you can skip to [Run the function locally](#run-the-function-locally) and examine the file contents later. For an HTTP trigger, the function receives request data in the variable `req` as Each binding requires a direction, a type, and a unique name. The HTTP trigger has an input binding of type [`httpTrigger`](functions-bindings-http-webhook-trigger.md) and output binding of type [`http`](functions-bindings-http-webhook-output.md). ++1. Run the `func init` command, as follows, to create a functions project in a folder named *LocalFunctionProj*: ++ ```console + func init LocalFunctionProj --model V4 + ``` + You are then prompted to choose a worker runtime and a language - choose Node for the first and JavaScript for the second. ++2. Navigate into the project folder: ++ ```console + cd LocalFunctionProj + ``` ++ This folder contains various files for the project, including configurations files named *local.settings.json* and *host.json*. Because *local.settings.json* can contain secrets downloaded from Azure, the file is excluded from source control by default in the *.gitignore* file. ++3. Add a function to your project by using the following command: ++ ```console + func new + ``` ++ Choose the template for "HTTP trigger". You can keep the default name (*httpTrigger*) or give it a new name (*HttpExample*). Your function name must be unique, or you'll be asked to confirm if your intention is to replace an existing function. ++ You can find the function you added in the *src/functions* directory. ++4. Add Azure Storage connection information in *local.settings.json*. + ```json + { + "Values": { + "AzureWebJobsStorage": "<Azure Storage connection information>", + "FUNCTIONS_WORKER_RUNTIME": "node", + "AzureWebJobsFeatureFlags": "EnableWorkerIndexing" + } + } + ``` ++5. (Optional) If you want to learn more about a particular function, say HTTP trigger, you can run the following command: ++ ```console + func help httptrigger + ``` + [!INCLUDE [functions-run-function-test-local-cli](../../includes/functions-run-function-test-local-cli.md)] [!INCLUDE [functions-create-azure-resources-cli](../../includes/functions-create-azure-resources-cli.md)] Each binding requires a direction, a type, and a unique name. The HTTP trigger h This command creates a function app running in your specified language runtime under the [Azure Functions Consumption Plan](consumption-plan.md), which is free for the amount of usage you incur here. The command also provisions an associated Azure Application Insights instance in the same resource group, with which you can monitor your function app and view logs. For more information, see [Monitor Azure Functions](functions-monitoring.md). The instance incurs no costs until you activate it. +## Update app settings ++To enable your V4 programming model app to run in Azure, you need to add a new application setting named `AzureWebJobsFeatureFlags` with a value of `EnableWorkerIndexing`. 
This setting is already in your local.settings.json file. ++Run the following command to add this setting to your new function app in Azure. Replace `<FUNCTION_APP_NAME>` and `<RESOURCE_GROUP_NAME>` with the name of your function app and resource group, respectively. ++# [Azure CLI](#tab/azure-cli) ++```azurecli +az functionapp config appsettings set --name <FUNCTION_APP_NAME> --resource-group <RESOURCE_GROUP_NAME> --settings AzureWebJobsFeatureFlags=EnableWorkerIndexing +``` ++# [Azure PowerShell](#tab/azure-powershell) ++```azurepowershell +Update-AzFunctionAppSetting -Name <FUNCTION_APP_NAME> -ResourceGroupName <RESOURCE_GROUP_NAME> -AppSetting @{"AzureWebJobsFeatureFlags" = "EnableWorkerIndexing"} +``` +++ [!INCLUDE [functions-publish-project-cli](../../includes/functions-publish-project-cli.md)] [!INCLUDE [functions-run-remote-azure-cli](../../includes/functions-run-remote-azure-cli.md)] |
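The quickstart above creates the HTTP trigger with `func new` but never shows the handler it generates. As a rough sketch only (function name, route, and message text are assumptions, not the exact generated file), a V4-model HTTP function under *src/functions* looks roughly like this:

```javascript
const { app } = require("@azure/functions");

// Minimal V4-model HTTP trigger sketch; the file generated by func new may differ in details.
app.http("httpTrigger", {
    methods: ["GET", "POST"],
    authLevel: "anonymous",
    handler: async (request, context) => {
        context.log(`Http function processed request for url "${request.url}"`);
        const name = request.query.get("name") || (await request.text()) || "world";
        return { body: `Hello, ${name}!` };
    },
});
```

After `func start`, a function registered this way is typically reachable at `/api/<function name>` by default.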
azure-functions | Create First Function Cli Typescript | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/create-first-function-cli-typescript.md | Title: Create a TypeScript function from the command line - Azure Functions description: Learn how to create a TypeScript function from the command line, then publish the local project to serverless hosting in Azure Functions. Previously updated : 11/18/2021 Last updated : 03/06/2023 ms.devlang: typescript +zone_pivot_groups: functions-nodejs-model # Quickstart: Create a TypeScript function in Azure from the command line - In this article, you use command-line tools to create a TypeScript function that responds to HTTP requests. After testing the code locally, you deploy it to the serverless environment of Azure Functions. -Completing this quickstart incurs a small cost of a few USD cents or less in your Azure account. +>[!NOTE] +>The v4 programming model for authoring Functions in Node.js is currently in Preview. Compared to the current v3 model, the v4 model is designed to have a more idiomatic and intuitive experience for JavaScript and TypeScript developers. To learn more, see the [Developer Reference Guide](functions-reference-node.md). ++Use the selector at the top to choose the programming model of your choice for completing this quickstart. Note that completion will incur a small cost of a few USD cents or less in your Azure account. -There is also a [Visual Studio Code-based version](create-first-function-vs-code-typescript.md) of this article. +There's also a [Visual Studio Code-based version](create-first-function-vs-code-typescript.md) of this article. ## Configure your local environment Before you begin, you must have the following: + An Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free/?ref=microsoft.com&utm_source=microsoft.com&utm_medium=docs&utm_campaign=visualstudio). -+ The [Azure Functions Core Tools](functions-run-local.md#v2) version 4.x. ++ The [Azure Functions Core Tools](./functions-run-local.md#v2) version 4.x.++ The [Azure Functions Core Tools](./functions-run-local.md#v2) version v4.0.5085 or above + One of the following tools for creating Azure resources: Before you begin, you must have the following: + The Azure [Az PowerShell module](/powershell/azure/install-az-ps) version 5.9.0 or later. + [Node.js](https://nodejs.org/) version 18 or 16. ++ [Node.js](https://nodejs.org/) version 18 or above. +++ [TypeScript](https://www.typescriptlang.org/) version 4+.+ ### Prerequisite check -Verify your prerequisites, which depend on whether you are using Azure CLI or Azure PowerShell for creating Azure resources: +Verify your prerequisites, which depend on whether you're using Azure CLI or Azure PowerShell for creating Azure resources: # [Azure CLI](#tab/azure-cli) + In a terminal or command window, run `func --version` to check that the Azure Functions Core Tools are version 4.x.+++ In a terminal or command window, run `func --version` to check that the Azure Functions Core Tools are version v4.0.4915 or above. + Run `az --version` to check that the Azure CLI version is 2.4 or later. 
Verify your prerequisites, which depend on whether you are using Azure CLI or Az # [Azure PowerShell](#tab/azure-powershell) + In a terminal or command window, run `func --version` to check that the Azure Functions Core Tools are version 4.x.+++ In a terminal or command window, run `func --version` to check that the Azure Functions Core Tools are version v4.0.4915 or above. + Run `(Get-Module -ListAvailable Az).Version` and verify version 5.0 or later. Verify your prerequisites, which depend on whether you are using Azure CLI or Az In Azure Functions, a function project is a container for one or more individual functions that each responds to a specific trigger. All functions in a project share the same local and hosting configurations. In this section, you create a function project that contains a single function. 1. Run the `func init` command, as follows, to create a functions project in a folder named *LocalFunctionProj* with the specified runtime: ```console func init LocalFunctionProj --typescript ``` -1. Navigate into the project folder: +2. Navigate into the project folder: ```console cd LocalFunctionProj In Azure Functions, a function project is a container for one or more individual This folder contains various files for the project, including configurations files named [local.settings.json](functions-develop-local.md#local-settings-file) and [host.json](functions-host-json.md). Because *local.settings.json* can contain secrets downloaded from Azure, the file is excluded from source control by default in the *.gitignore* file. -1. Add a function to your project by using the following command, where the `--name` argument is the unique name of your function (HttpExample) and the `--template` argument specifies the function's trigger (HTTP). +3. Add a function to your project by using the following command, where the `--name` argument is the unique name of your function (HttpExample) and the `--template` argument specifies the function's trigger (HTTP). ```console func new --name HttpExample --template "HTTP trigger" --authlevel "anonymous" For an HTTP trigger, the function receives request data in the variable `req` of Each binding requires a direction, a type, and a unique name. The HTTP trigger has an input binding of type [`httpTrigger`](functions-bindings-http-webhook-trigger.md) and output binding of type [`http`](functions-bindings-http-webhook-output.md). ++1. Run the `func init` command, as follows, to create a functions project in a folder named *LocalFunctionProj* with the V4 programming model: ++ ```console + func init LocalFunctionProj --model V4 + ``` ++ You're then prompted to choose a worker runtime and a language - choose Node for the first and TypeScript for the second. ++2. Navigate into the project folder: ++ ```console + cd LocalFunctionProj + ``` ++ This folder contains various files for the project, including configurations files named *local.settings.json* and *host.json*. Because *local.settings.json* can contain secrets downloaded from Azure, the file is excluded from source control by default in the *.gitignore* file. ++3. Add a function to your project by using the following command: ++ ```console + func new + ``` ++ Choose the template for "HTTP trigger". You can keep the default name (*httpTrigger*) or give it a new name (*HttpExample*). Your function name must be unique, or you'll be asked to confirm if your intention is to replace an existing function. ++ You can find the function you added in the *src/functions* directory. ++4. 
Add Azure Storage connection information in *local.settings.json*. + ```json + { + "Values": { + "AzureWebJobsStorage": "<Azure Storage connection information>", + "FUNCTIONS_WORKER_RUNTIME": "node", + "AzureWebJobsFeatureFlags": "EnableWorkerIndexing" + } + } + ``` +5. (Optional) If you want to learn more about a particular function, say HTTP trigger, you can run the following command: ++ ```console + func help httptrigger + ``` + ## Run the function locally 1. Run your function by starting the local Azure Functions runtime host from the *LocalFunctionProj* folder: + ::: zone pivot="nodejs-model-v3" ```console npm install npm start ```+ ::: zone-end - Toward the end of the output, the following lines should appear: -- <pre> - ... -- Now listening on: http://0.0.0.0:7071 - Application started. Press Ctrl+C to shut down. + ::: zone pivot="nodejs-model-v4" + ```console + npm start + ``` + ::: zone-end - Http Functions: + Toward the end of the output, the following should appear: - HttpExample: [GET,POST] http://localhost:7071/api/HttpExample - ... -- </pre> +  >[!NOTE] > If HttpExample doesn't appear as shown below, you likely started the host from outside the root folder of the project. In that case, use **Ctrl**+**C** to stop the host, navigate to the project's root folder, and run the previous command again. Each binding requires a direction, a type, and a unique name. The HTTP trigger h az functionapp create --resource-group AzureFunctionsQuickstart-rg --consumption-plan-location <REGION> --runtime node --runtime-version 18 --functions-version 4 --name <APP_NAME> --storage-account <STORAGE_NAME> ``` - The [az functionapp create](/cli/azure/functionapp#az-functionapp-create) command creates the function app in Azure. It is recommended that you use the latest version of Node.js, which is currently 18. You can specify the version by setting `--runtime-version` to `18`. + The [az functionapp create](/cli/azure/functionapp#az-functionapp-create) command creates the function app in Azure. It's recommended that you use the latest version of Node.js, which is currently 18. You can specify the version by setting `--runtime-version` to `18`. # [Azure PowerShell](#tab/azure-powershell) Each binding requires a direction, a type, and a unique name. The HTTP trigger h New-AzFunctionApp -Name <APP_NAME> -ResourceGroupName AzureFunctionsQuickstart-rg -StorageAccount <STORAGE_NAME> -Runtime node -RuntimeVersion 18 -FunctionsVersion 4 -Location '<REGION>' ``` - The [New-AzFunctionApp](/powershell/module/az.functions/new-azfunctionapp) cmdlet creates the function app in Azure. It is recommended that you use the latest version of Node.js, which is currently 18. You can specify the version by setting `--runtime-version` to `18`. + The [New-AzFunctionApp](/powershell/module/az.functions/new-azfunctionapp) cmdlet creates the function app in Azure. It's recommended that you use the latest version of Node.js, which is currently 18. You can specify the version by setting `--runtime-version` to `18`. + In the previous example, replace `<STORAGE_NAME>` with the name of the account you used in the previous step, and replace `<APP_NAME>` with a globally unique name appropriate to you. The `<APP_NAME>` is also the default DNS domain for the function app. This command creates a function app running in your specified language runtime under the [Azure Functions Consumption Plan](consumption-plan.md), which is free for the amount of usage you incur here. 
The command also provisions an associated Azure Application Insights instance in the same resource group, with which you can monitor your function app and view logs. For more information, see [Monitor Azure Functions](functions-monitoring.md). The instance incurs no costs until you activate it. +## Update app settings ++To enable your V4 programming model app to run in Azure, you need to add a new application setting named `AzureWebJobsFeatureFlags` with a value of `EnableWorkerIndexing`. This setting is already in your local.settings.json file. ++Run the following command to add this setting to your new function app in Azure. Replace `<FUNCTION_APP_NAME>` and `<RESOURCE_GROUP_NAME>` with the name of your function app and resource group, respectively. ++# [Azure CLI](#tab/azure-cli) ++```azurecli +az functionapp config appsettings set --name <FUNCTION_APP_NAME> --resource-group <RESOURCE_GROUP_NAME> --settings AzureWebJobsFeatureFlags=EnableWorkerIndexing +``` ++# [Azure PowerShell](#tab/azure-powershell) ++```azurepowershell +Update-AzFunctionAppSetting -Name <FUNCTION_APP_NAME> -ResourceGroupName <RESOURCE_GROUP_NAME> -AppSetting @{"AzureWebJobsFeatureFlags" = "EnableWorkerIndexing"} +``` +++ ## Deploy the function project to Azure Before you use Core Tools to deploy your project to Azure, you create a production-ready build of JavaScript files from the TypeScript source files. |
azure-functions | Create First Function Vs Code Node | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/create-first-function-vs-code-node.md | Title: Create a JavaScript function using Visual Studio Code - Azure Functions description: Learn how to create a JavaScript function, then publish the local Node.js project to serverless hosting in Azure Functions using the Azure Functions extension in Visual Studio Code. Previously updated : 06/07/2022 Last updated : 02/06/2023 adobe-target: true adobe-target-activity: DocsExpΓÇô386541ΓÇôA/BΓÇôEnhanced-Readability-QuickstartsΓÇô2.19.2021 adobe-target-experience: Experience B adobe-target-content: ./create-first-function-vs-code-node_uiex ms.devlang: javascript +zone_pivot_groups: functions-nodejs-model # Quickstart: Create a JavaScript function in Azure using Visual Studio Code - Use Visual Studio Code to create a JavaScript function that responds to HTTP requests. Test the code locally, then deploy it to the serverless environment of Azure Functions. -Completing this quickstart incurs a small cost of a few USD cents or less in your Azure account. +>[!NOTE] +>The v4 programming model for authoring Functions in Node.js is currently in Preview. Compared to the current v3 model, the v4 model is designed to have a more idiomatic and intuitive experience for JavaScript and TypeScript developers. To learn more, see the [Developer Reference Guide](functions-reference-node.md). ++Use the selector at the top to choose the programming model of your choice for completing this quickstart. Note that completion will incur a small cost of a few USD cents or less in your Azure account. There's also a [CLI-based version](create-first-function-cli-node.md) of this article. There's also a [CLI-based version](create-first-function-cli-node.md) of this ar Before you get started, make sure you have the following requirements in place: [!INCLUDE [functions-requirements-visual-studio-code-node](../../includes/functions-requirements-visual-studio-code-node.md)] ## <a name="create-an-azure-functions-project"></a>Create your local project In this section, you use Visual Studio Code to create a local Azure Functions pr :::image type="content" source="./media/functions-create-first-function-vs-code/create-new-project.png" alt-text="Screenshot of create a new project window."::: -1. Choose the directory location for your project workspace and choose **Select**. You should either create a new folder or choose an empty folder for the project workspace. Don't choose a project folder that is already part of a workspace. +2. Choose the directory location for your project workspace and choose **Select**. You should either create a new folder or choose an empty folder for the project workspace. Don't choose a project folder that is already part of a workspace. -1. Provide the following information at the prompts: +3. Provide the following information at the prompts: |Prompt|Selection| |--|--| |**Select a language for your function project**|Choose `JavaScript`.|+ |**Select a JavaScript programming model**|Choose `Model V3`| |**Select a template for your project's first function**|Choose `HTTP trigger`.| |**Provide a function name**|Type `HttpExample`.| |**Authorization level**|Choose `Anonymous`, which enables anyone to call your function endpoint. 
To learn about authorization level, see [Authorization keys](functions-bindings-http-webhook-trigger.md#authorization-keys).| |**Select how you would like to open your project**|Choose `Add to workspace`.| Using this information, Visual Studio Code generates an Azure Functions project with an HTTP trigger. You can view the local project files in the Explorer. To learn more about files that are created, see [Generated project files](functions-develop-vs-code.md?tabs=javascript#generated-project-files). +3. Provide the following information at the prompts: ++ |Prompt|Selection| + |--|--| + |**Select a language for your function project**|Choose `JavaScript`.| + |**Select a JavaScript programming model**|Choose `Model V4 (Preview)`| + |**Select a template for your project's first function**|Choose `HTTP trigger`.| + |**Provide a function name**|Type `HttpExample`.| + |**Select how you would like to open your project**|Choose `Add to workspace`| ++ Using this information, Visual Studio Code generates an Azure Functions project with an HTTP trigger. You can view the local project files in the Explorer. To learn more about files that are created, see [Azure Functions JavaScript developer guide](functions-reference-node.md). [!INCLUDE [functions-run-function-test-local-vs-code](../../includes/functions-run-function-test-local-vs-code.md)] After you've verified that the function runs correctly on your local computer, i [!INCLUDE [functions-sign-in-vs-code](../../includes/functions-sign-in-vs-code.md)] +## Create the function app in Azure +++## Update app settings ++To enable your V4 programming model app to run in Azure, you need to add a new application setting named `AzureWebJobsFeatureFlags` with a value of `EnableWorkerIndexing`. This setting is already in your local.settings.json file. ++1. In Visual Studio Code, press <kbd>F1</kbd> to open the command palette. In the command palette, search for and select `Azure Functions: Add New Setting...`. ++1. Choose your new function app, type `AzureWebJobsFeatureFlags` for the new app setting name, and press <kbd>Enter</kbd>. ++1. For the value, type `EnableWorkerIndexing` and press <kbd>Enter</kbd>. ++## Deploy the project to Azure + [!INCLUDE [functions-vs-code-run-remote](../../includes/functions-vs-code-run-remote.md)] ## Change the code and redeploy to Azure 1. In Visual Studio Code in the Explorer view, select the `./HttpExample/index.js` file. After you've verified that the function runs correctly on your local computer, i ``` 1. [Redeploy the function](#deploy-the-project-to-azure) to Azure. ## Troubleshooting Use the table below to resolve the most common issues encountered when using thi |Problem|Solution| |--|--| |Can't create a local function project?|Make sure you have the [Azure Functions extension](https://marketplace.visualstudio.com/items?itemName=ms-azuretools.vscode-azurefunctions) installed.|-|Can't run the function locally?|Make sure you have the [Azure Functions Core Tools installed](functions-run-local.md?tabs=node) installed. <br/>When running on Windows, make sure that the default terminal shell for Visual Studio Code isn't set to WSL Bash.| +|Can't run the function locally?|Make sure you have the latest version of [Azure Functions Core Tools installed](functions-run-local.md?tabs=node) installed. <br/>When running on Windows, make sure that the default terminal shell for Visual Studio Code isn't set to WSL Bash.| |Can't deploy function to Azure?|Review the Output for error information. 
The bell icon in the lower right corner is another way to view the output. Did you publish to an existing function app? That action overwrites the content of that app in Azure.| |Couldn't run the cloud-based Function app?|Remember to use the query string to send in parameters.| |
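For reference when following the "Change the code and redeploy" step above, the `./HttpExample/index.js` file is a V3-model handler. A typical shape (response wording assumed, not the exact generated file) is roughly:

```javascript
// Sketch of a V3-model HTTP handler; the generated file may differ in wording.
module.exports = async function (context, req) {
    context.log("JavaScript HTTP trigger function processed a request.");

    const name = req.query.name || (req.body && req.body.name);
    const responseMessage = name
        ? `Hello, ${name}. This HTTP triggered function executed successfully.`
        : "This HTTP triggered function executed successfully. Pass a name in the query string or request body for a personalized response.";

    context.res = {
        // status defaults to 200
        body: responseMessage,
    };
};
```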
azure-functions | Create First Function Vs Code Typescript | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/create-first-function-vs-code-typescript.md | Title: Create a TypeScript function using Visual Studio Code - Azure Functions description: Learn how to create a TypeScript function, then publish the local Node.js project to serverless hosting in Azure Functions using the Azure Functions extension in Visual Studio Code. Previously updated : 06/18/2022 Last updated : 02/06/2023 ms.devlang: typescript +zone_pivot_groups: functions-nodejs-model # Quickstart: Create a function in Azure with TypeScript using Visual Studio Code - In this article, you use Visual Studio Code to create a TypeScript function that responds to HTTP requests. After testing the code locally, you deploy it to the serverless environment of Azure Functions. -Completing this quickstart incurs a small cost of a few USD cents or less in your Azure account. +>[!NOTE] +>The v4 programming model for authoring Functions in Node.js is currently in Preview. Compared to the current v3 model, the v4 model is designed to have a more idiomatic and intuitive experience for JavaScript and TypeScript developers. To learn more, see the [Developer Reference Guide](functions-reference-node.md). ++Use the selector at the top to choose the programming model of your choice for completing this quickstart. Note that completion will incur a small cost of a few USD cents or less in your Azure account. There's also a [CLI-based version](create-first-function-cli-typescript.md) of this article. Before you get started, make sure you have the following requirements in place: + An Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free/?ref=microsoft.com&utm_source=microsoft.com&utm_medium=docs&utm_campaign=visualstudio). + [Node.js 18.x](https://nodejs.org/en/download/releases/) or [Node.js 16.x](https://nodejs.org/en/download/releases/). Use the `node --version` command to check your version. ++ [Node.js 18.x](https://nodejs.org/en/download/releases/) or above. Use the `node --version` command to check your version. +++ [TypeScript 4.x](https://www.typescriptlang.org/). Use the `tsc -v` command to check your version. + [Visual Studio Code](https://code.visualstudio.com/) on one of the [supported platforms](https://code.visualstudio.com/docs/supporting/requirements#_platforms). -+ The [Azure Functions extension](https://marketplace.visualstudio.com/items?itemName=ms-azuretools.vscode-azurefunctions) for Visual Studio Code. ++ The [Azure Functions extension v1.10.4](https://marketplace.visualstudio.com/items?itemName=ms-azuretools.vscode-azurefunctions) or above for Visual Studio Code. + [Azure Functions Core Tools 4.x](functions-run-local.md#install-the-azure-functions-core-tools).++ [Azure Functions Core Tools v4.0.5085 or above](functions-run-local.md#install-the-azure-functions-core-tools). ## <a name="create-an-azure-functions-project"></a>Create your local project In this section, you use Visual Studio Code to create a local Azure Functions pr :::image type="content" source="./media/functions-create-first-function-vs-code/create-new-project.png" alt-text="Screenshot of create a new project window."::: -1. Choose the directory location for your project workspace and choose **Select**. You should either create a new folder or choose an empty folder for the project workspace. Don't choose a project folder that is already part of a workspace. +2. 
Choose the directory location for your project workspace and choose **Select**. You should either create a new folder or choose an empty folder for the project workspace. Don't choose a project folder that is already part of a workspace. -1. Provide the following information at the prompts: +3. Provide the following information at the prompts: |Prompt|Selection| |--|--| |**Select a language for your function project**|Choose `TypeScript`.|+ |**Select a TypeScript programming model**|Choose `Model V3`| |**Select a template for your project's first function**|Choose `HTTP trigger`.| |**Provide a function name**|Type `HttpExample`.| |**Authorization level**|Choose `Anonymous`, which enables anyone to call your function endpoint. To learn about authorization level, see [Authorization keys](functions-bindings-http-webhook-trigger.md#authorization-keys).| |**Select how you would like to open your project**|Choose `Add to workspace`.| Using this information, Visual Studio Code generates an Azure Functions project with an HTTP trigger. You can view the local project files in the Explorer. To learn more about files that are created, see [Generated project files](functions-develop-vs-code.md?tabs=typescript#generated-project-files).+3. Provide the following information at the prompts: ++ |Prompt|Selection| + |--|--| + |**Select a language for your function project**|Choose `TypeScript`.| + |**Select a TypeScript programming model**|Choose `Model V4 (Preview)`| + |**Select a template for your project's first function**|Choose `HTTP trigger`.| + |**Provide a function name**|Type `HttpExample`.| + |**Select how you would like to open your project**|Choose `Add to workspace`| ++ Using this information, Visual Studio Code generates an Azure Functions project with an HTTP trigger. You can view the local project files in the Explorer. To learn more about files that are created, see [Azure Functions TypeScript developer guide](functions-reference-node.md). [!INCLUDE [functions-run-function-test-local-vs-code](../../includes/functions-run-function-test-local-vs-code.md)] After you've verified that the function runs correctly on your local computer, i [!INCLUDE [functions-sign-in-vs-code](../../includes/functions-sign-in-vs-code.md)] +## Create the function app in Azure +++## Update app settings ++To enable your V4 programming model app to run in Azure, you need to add a new application setting named `AzureWebJobsFeatureFlags` with a value of `EnableWorkerIndexing`. This setting is already in your local.settings.json file. ++1. In Visual Studio Code, press <kbd>F1</kbd> to open the command palette. In the command palette, search for and select `Azure Functions: Add New Setting...`. ++1. Choose your new function app, type `AzureWebJobsFeatureFlags` for the new app setting name, and press <kbd>Enter</kbd>. ++1. For the value, type `EnableWorkerIndexing` and press <kbd>Enter</kbd>. ++## Deploy the project to Azure + [!INCLUDE [functions-vs-code-run-remote](../../includes/functions-vs-code-run-remote.md)] |
azure-functions | Durable Functions Cloud Backup | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/durable/durable-functions-cloud-backup.md | Title: Fan-out/fan-in scenarios in Durable Functions - Azure description: Learn how to implement a fan-out-fan-in scenario in the Durable Functions extension for Azure Functions. Previously updated : 11/02/2019 Last updated : 02/14/2023 +> [!NOTE] +> The new programming model for authoring Functions in Node.js (V4) is currently in preview. Compared to the current model, the new experience is designed to be more idiomatic and intuitive for JavaScript and TypeScript developers. To learn more, see the Azure Functions Node.js [developer guide](../functions-reference-node.md?pivots=nodejs-model-v4). +> +> In the following code snippets, JavaScript (PM4) denotes programming model V4, the new experience. + [!INCLUDE [durable-functions-prerequisites](../../../includes/durable-functions-prerequisites.md)] ## Scenario overview Notice the `await Task.WhenAll(tasks);` line. All the individual calls to the `E After awaiting from `Task.WhenAll`, we know that all function calls have completed and have returned values back to us. Each call to `E2_CopyFileToBlob` returns the number of bytes uploaded, so calculating the sum total byte count is a matter of adding all those return values together. -# [JavaScript](#tab/javascript) +# [JavaScript (PM3)](#tab/javascript-v3) The function uses the standard *function.json* for orchestrator functions. Notice the `yield context.df.Task.all(tasks);` line. All the individual calls to After yielding from `context.df.Task.all`, we know that all function calls have completed and have returned values back to us. Each call to `E2_CopyFileToBlob` returns the number of bytes uploaded, so calculating the sum total byte count is a matter of adding all those return values together. +# [JavaScript (PM4)](#tab/javascript-v4) ++Here is the code that implements the orchestrator function: +++Notice the `yield context.df.Task.all(tasks);` line. All the individual calls to the `copyFileToBlob` function were *not* yielded, which allows them to run in parallel. When we pass this array of tasks to `context.df.Task.all`, we get back a task that won't complete *until all the copy operations have completed*. If you're familiar with [`Promise.all`](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/Promise/all) in JavaScript, then this is not new to you. The difference is that these tasks could be running on multiple virtual machines concurrently, and the Durable Functions extension ensures that the end-to-end execution is resilient to process recycling. ++> [!NOTE] +> Although Tasks are conceptually similar to JavaScript promises, orchestrator functions should use `context.df.Task.all` and `context.df.Task.any` instead of `Promise.all` and `Promise.race` to manage task parallelization. ++After yielding from `context.df.Task.all`, we know that all function calls have completed and have returned values back to us. Each call to `copyFileToBlob` returns the number of bytes uploaded, so calculating the sum total byte count is a matter of adding all those return values together. + # [Python](#tab/python) The function uses the standard *function.json* for orchestrator functions. 
The helper activity functions, as with other samples, are just regular functions [!code-csharp[Main](~/samples-durable-functions/samples/precompiled/BackupSiteContent.cs?range=44-54)] -# [JavaScript](#tab/javascript) +# [JavaScript (PM3)](#tab/javascript-v3) The *function.json* file for `E2_GetFileList` looks like the following: And here is the implementation: The function uses the `readdirp` module (version 2.x) to recursively read the directory structure. +# [JavaScript (PM4)](#tab/javascript-v4) ++Here is the implementation of the `getFileList` activity function: +++The function uses the `readdirp` module (version `3.x`) to recursively read the directory structure. + # [Python](#tab/python) The *function.json* file for `E2_GetFileList` looks like the following: And here is the implementation: The function uses some advanced features of Azure Functions bindings (that is, the use of the [`Binder` parameter](../functions-dotnet-class-library.md#binding-at-runtime)), but you don't need to worry about those details for the purpose of this walkthrough. -# [JavaScript](#tab/javascript) +# [JavaScript (PM3)](#tab/javascript-v3) The *function.json* file for `E2_CopyFileToBlob` is similarly simple: The JavaScript implementation uses the [Azure Storage SDK for Node](https://gith :::code language="javascript" source="~/azure-functions-durable-js/samples/E2_CopyFileToBlob/index.js"::: +# [JavaScript (PM4)](#tab/javascript-v4) ++The JavaScript implementation of `copyFileToBlob` uses an Azure Storage output binding to upload the files to Azure Blob storage. ++ # [Python](#tab/python) The *function.json* file for `E2_CopyFileToBlob` is similarly simple: |
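The backup sample above walks the on-disk directory tree with the `readdirp` module. As a rough illustration of the 3.x usage referenced in the V4 tab (the root-directory parameter and function name are assumptions), the file enumeration can be written as an async iteration:

```javascript
const readdirp = require("readdirp");

// Sketch of the directory walk performed by the file-listing activity (names assumed).
async function getFileList(rootDirectory) {
    const filePaths = [];
    // readdirp 3.x exposes matching entries as an async iterable.
    for await (const entry of readdirp(rootDirectory, { type: "files" })) {
        filePaths.push(entry.fullPath);
    }
    return filePaths;
}
```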
azure-functions | Durable Functions Error Handling | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/durable/durable-functions-error-handling.md | Title: Handling errors in Durable Functions - Azure description: Learn how to handle errors in the Durable Functions extension for Azure Functions. Previously updated : 12/07/2022 Last updated : 02/14/2023 ms.devlang: csharp, javascript, powershell, python, java ms.devlang: csharp, javascript, powershell, python, java Durable Function orchestrations are implemented in code and can use the programming language's built-in error-handling features. There really aren't any new concepts you need to learn to add error handling and compensation into your orchestrations. However, there are a few behaviors that you should be aware of. +> [!NOTE] +> The new programming model for authoring Functions in Node.js (V4) is currently in preview. Compared to the current model, the new experience is designed to be more idiomatic and intuitive for JavaScript and TypeScript developers. To learn more, see the Azure Functions Node.js [developer guide](../functions-reference-node.md?pivots=nodejs-model-v4). +> +> In the following code snippets, JavaScript (PM4) denotes programming model V4, the new experience. + ## Errors in activity functions Any exception that is thrown in an activity function is marshaled back to the orchestrator function and thrown as a `FunctionFailedException`. You can write error handling and compensation code that suits your needs in the orchestrator function. public static async Task Run( } ``` -# [JavaScript](#tab/javascript) +# [JavaScript (PM3)](#tab/javascript-v3) ```javascript const df = require("durable-functions"); -module.exports = df.orchestrator(function*(context) { +module.exports = df.orchestrator(function* (context) { const transferDetails = context.df.getInput(); - yield context.df.callActivity("DebitAccount", - { - account = transferDetails.sourceAccount, - amount = transferDetails.amount, - } - ); + yield context.df.callActivity("DebitAccount", { + account: transferDetails.sourceAccount, + amount: transferDetails.amount, + }); try {- yield context.df.callActivity("CreditAccount", - { - account = transferDetails.destinationAccount, - amount = transferDetails.amount, - } - ); + yield context.df.callActivity("CreditAccount", { + account: transferDetails.destinationAccount, + amount: transferDetails.amount, + }); + } catch (error) { + // Refund the source account. + // Another try/catch could be used here based on the needs of the application. + yield context.df.callActivity("CreditAccount", { + account: transferDetails.sourceAccount, + amount: transferDetails.amount, + }); }- catch (error) { +}) +``` +# [JavaScript (PM4)](#tab/javascript-v4) ++```javascript +const df = require("durable-functions"); ++df.app.orchestration("transferFunds", function* (context) { + const transferDetails = context.df.getInput(); ++ yield context.df.callActivity("debitAccount", { + account: transferDetails.sourceAccount, + amount: transferDetails.amount, + }); ++ try { + yield context.df.callActivity("creditAccount", { + account: transferDetails.destinationAccount, + amount: transferDetails.amount, + }); + } catch (error) { // Refund the source account. 
// Another try/catch could be used here based on the needs of the application.- yield context.df.callActivity("CreditAccount", - { - account = transferDetails.sourceAccount, - amount = transferDetails.amount, - } - ); + yield context.df.callActivity("creditAccount", { + account: transferDetails.sourceAccount, + amount: transferDetails.amount, + }); } }); ```+ # [Python](#tab/python) ```python public static async Task Run([OrchestrationTrigger] TaskOrchestrationContext con } ``` -# [JavaScript](#tab/javascript) +# [JavaScript (PM3)](#tab/javascript-v3) ```javascript const df = require("durable-functions"); module.exports = df.orchestrator(function*(context) { }); ``` +# [JavaScript (PM4)](#tab/javascript-v4) ++```javascript +const df = require("durable-functions"); ++df.app.orchestration("callActivityWithRetry", function* (context) { + const firstRetryIntervalInMilliseconds = 5000; + const maxNumberOfAttempts = 3; ++ const retryOptions = new df.RetryOptions(firstRetryIntervalInMilliseconds, maxNumberOfAttempts); ++ yield context.df.callActivityWithRetry("flakyFunction", retryOptions); ++ // ... +}); +``` + # [Python](#tab/python) ```python catch (TaskFailedException) } ``` -# [JavaScript](#tab/javascript) +# [JavaScript (PM3)](#tab/javascript-v3) ++JavaScript doesn't currently support custom retry handlers. However, you still have the option of implementing retry logic directly in the orchestrator function using loops, exception handling, and timers for injecting delays between retries. ++# [JavaScript (PM4)](#tab/javascript-v4) JavaScript doesn't currently support custom retry handlers. However, you still have the option of implementing retry logic directly in the orchestrator function using loops, exception handling, and timers for injecting delays between retries. public static async Task<bool> Run([OrchestrationTrigger] TaskOrchestrationConte } ``` -# [JavaScript](#tab/javascript) +# [JavaScript (PM3)](#tab/javascript-v3) ```javascript const df = require("durable-functions"); module.exports = df.orchestrator(function*(context) { }); ``` +# [JavaScript (PM4)](#tab/javascript-v4) ++```javascript +const df = require("durable-functions"); +const { DateTime } = require("luxon"); ++df.app.orchestration("timerOrchestrator", function* (context) { + const deadline = DateTime.fromJSDate(context.df.currentUtcDateTime).plus({ seconds: 30 }); ++ const activityTask = context.df.callActivity("flakyFunction"); + const timeoutTask = context.df.createTimer(deadline.toJSDate()); ++ const winner = yield context.df.Task.any([activityTask, timeoutTask]); + if (winner === activityTask) { + // success case + timeoutTask.cancel(); + return true; + } else { + // timeout case + return false; + } +}); +``` + # [Python](#tab/python) ```python |
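Both JavaScript tabs above note that custom retry handlers aren't supported and suggest hand-rolling retry logic with loops, exception handling, and timers. A minimal sketch of that approach in the V4 style used above (activity name, attempt count, and delay are assumptions):

```javascript
const df = require("durable-functions");
const { DateTime } = require("luxon");

df.app.orchestration("manualRetryDemo", function* (context) {
    const maxAttempts = 3;
    for (let attempt = 1; attempt <= maxAttempts; attempt++) {
        try {
            // Success: return the activity result and stop retrying.
            return yield context.df.callActivity("flakyFunction");
        } catch (error) {
            if (attempt === maxAttempts) {
                throw error; // out of attempts, surface the failure
            }
            // Back off with a durable timer before the next attempt.
            const retryAt = DateTime.fromJSDate(context.df.currentUtcDateTime).plus({ seconds: 5 });
            yield context.df.createTimer(retryAt.toJSDate());
        }
    }
});
```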
azure-functions | Durable Functions Orchestrations | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/durable/durable-functions-orchestrations.md | Title: Durable Orchestrations - Azure Functions description: Introduction to the orchestration feature for Azure Durable Functions. Previously updated : 12/07/2022 Last updated : 02/14/2023 ms.devlang: csharp, javascript, powershell, python, java #Customer intent: As a developer, I want to understand durable orchestrations so that I can use them effectively in my applications. When an orchestration function is given more work to do (for example, a response The event-sourcing behavior of the Durable Task Framework is closely coupled with the orchestrator function code you write. Suppose you have an activity-chaining orchestrator function, like the following orchestrator function: +> [!NOTE] +> The new programming model for authoring Functions in Node.js (V4) is currently in preview. Compared to the current model, the new experience is designed to be more idiomatic and intuitive for JavaScript and TypeScript developers. To learn more, see the Azure Functions Node.js [developer guide](../functions-reference-node.md?pivots=nodejs-model-v4). +> +> In the following code snippets, JavaScript (PM4) denotes programming model V4, the new experience. + # [C# (InProc)](#tab/csharp-inproc) ```csharp public static async Task<List<string>> Run( } ``` -# [JavaScript](#tab/javascript) +# [JavaScript (PM3)](#tab/javascript-v3) ```javascript const df = require("durable-functions"); module.exports = df.orchestrator(function*(context) { }); ``` +# [JavaScript (PM4)](#tab/javascript-v4) ++```javascript +const df = require("durable-functions"); +const helloActivityName = "sayHello"; ++df.app.orchestration("helloSequence", function* (context) { + const output = []; + output.push(yield context.df.callActivity(helloActivityName, "Tokyo")); + output.push(yield context.df.callActivity(helloActivityName, "Seattle")); + output.push(yield context.df.callActivity(helloActivityName, "Cairo")); ++ // returns ["Hello Tokyo!", "Hello Seattle!", "Hello Cairo!"] + return output; +}); +``` + # [Python](#tab/python) ```python public static async Task CheckSiteAvailable( The feature is not currently supported in dotnet-isolated worker. Instead, write an activity which performs the desired HTTP call. 
-# [JavaScript](#tab/javascript) +# [JavaScript (PM3)](#tab/javascript-v3) ```javascript const df = require("durable-functions"); module.exports = df.orchestrator(function*(context) { }); ``` +# [JavaScript (PM4)](#tab/javascript-v4) ++```javascript +const df = require("durable-functions"); ++df.app.orchestration("checkSiteAvailable", function* (context) { + const url = context.df.getInput(); + var res = yield context.df.callHttp({ method: "GET", url }); + if (res.statusCode >= 400) { + // handling of error codes goes here + } +}); +``` + # [Python](#tab/python) ```python public static async Task<object> Mapper( } ``` -# [JavaScript](#tab/javascript) +# [JavaScript (PM3)](#tab/javascript-v3) #### Orchestrator module.exports = df.orchestrator(function*(context) { const weather = yield context.df.callActivity("GetWeather", location); // ...-}; +}); ``` #### `GetWeather` Activity module.exports = async function (context, location) { }; ``` +# [JavaScript (PM4)](#tab/javascript-v4) +++```javascript +const getWeatherActivityName = "getWeather"; ++df.app.orchestration("getWeatherOrchestrator", function* (context) { + const location = { + city: "Seattle", + state: "WA", + }; + const weather = yield context.df.callActivity(getWeatherActivityName, location); ++ // ... +}); ++df.app.activity(getWeatherActivityName, async function (location) { + const { city, state } = location; // destructure properties into variables ++ // ... +}); +``` + # [Python](#tab/python) #### Orchestrator |
azure-functions | Durable Functions Overview | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/durable/durable-functions-overview.md | Title: Durable Functions Overview - Azure description: Introduction to the Durable Functions extension for Azure Functions. Previously updated : 12/07/2022 Last updated : 02/13/2023 Durable Functions is designed to work with all Azure Functions programming langu | Language stack | Azure Functions Runtime versions | Language worker version | Minimum bundles version | | - | - | - | - |-| .NET / C# / F# | Functions 1.0+ | In-process <br/> Out-of-process| n/a | -| JavaScript/TypeScript | Functions 2.0+ | Node 8+ | 2.x bundles | +| .NET / C# / F# | Functions 1.0+ | In-process <br/> Out-of-process | n/a | +| JavaScript/TypeScript (V3 prog. model) | Functions 2.0+ | Node 8+ | 2.x bundles | +| JavaScript/TypeScript (V4 prog. model) | Functions 4.16.5+ | Node 18+ | 3.15+ bundles | | Python | Functions 2.0+ | Python 3.7+ | 2.x bundles | | Python (V2 prog. model) | Functions 4.0+ | Python 3.7+ | 3.15+ bundles | | PowerShell | Functions 3.0+ | PowerShell 7+ | 2.x bundles | | Java | Functions 4.0+ | Java 8+ | 4.x bundles | > [!NOTE]-> The new programming model for authoring Functions in Python (V2) is currently in preview. Compared to the current model, the new experience is designed to be more idiomatic and intuitive for Python programmers. To learn more, see Azure Functions Python [developer guide](../functions-reference-python.md?pivots=python-mode-decorators). +> The new programming models for authoring Functions in Python (V2) and Node.js (V4) are currently in preview. Compared to the current models, the new experiences are designed to be more idiomatic and intuitive for Python and JavaScript/TypeScript developers. To learn more, see Azure Functions [Python developer guide](../functions-reference-python.md?pivots=python-mode-decorators) and [Node.js developer guide](../functions-reference-node.md?pivots=nodejs-model-v4). >-> In the following code snippets, Python (PM2) denotes programming model V2, the new experience. +> In the following code snippets, Python (PM2) denotes programming model V2, and JavaScript (PM4) denotes programming model V4, the new experiences. Like Azure Functions, there are templates to help you develop Durable Functions using [Visual Studio](durable-functions-create-first-csharp.md), [Visual Studio Code](quickstart-js-vscode.md), and the [Azure portal](durable-functions-create-portal.md). public static async Task<object> Run( You can use the `context` parameter to invoke other functions by name, pass parameters, and return function output. Each time the code calls `await`, the Durable Functions framework checkpoints the progress of the current function instance. If the process or virtual machine recycles midway through the execution, the function instance resumes from the preceding `await` call. For more information, see the next section, Pattern #2: Fan out/fan in. -# [JavaScript](#tab/javascript) +# [JavaScript (PM3)](#tab/javascript-v3) ```javascript const df = require("durable-functions"); You can use the `context.df` object to invoke other functions by name, pass para > [!NOTE] > The `context` object in JavaScript represents the entire [function context](../functions-reference-node.md#context-object). Access the Durable Functions context using the `df` property on the main context. 
+# [JavaScript (PM4)](#tab/javascript-v4) ++```javascript +const df = require("durable-functions"); ++df.app.orchestration("chainingDemo", function* (context) { + try { + const x = yield context.df.callActivity("F1"); + const y = yield context.df.callActivity("F2", x); + const z = yield context.df.callActivity("F3", y); + return yield context.df.callActivity("F4", z); + } catch (error) { + // Error handling or compensation goes here. + } +}); +``` ++You can use the `context.df` object to invoke other functions by name, pass parameters, and return function output. Each time the code calls `yield`, the Durable Functions framework checkpoints the progress of the current function instance. If the process or virtual machine recycles midway through the execution, the function instance resumes from the preceding `yield` call. For more information, see the next section, Pattern #2: Fan out/fan in. ++> [!NOTE] +> The `context` object in JavaScript represents the entire [function context](../functions-reference-node.md#context-object). Access the Durable Functions context using the `df` property on the main context. + # [Python](#tab/python) ```python The fan-out work is distributed to multiple instances of the `F2` function. The The automatic checkpointing that happens at the `await` call on `Task.WhenAll` ensures that a potential midway crash or reboot doesn't require restarting an already completed task. -# [JavaScript](#tab/javascript) +# [JavaScript (PM3)](#tab/javascript-v3) ```javascript const df = require("durable-functions"); The fan-out work is distributed to multiple instances of the `F2` function. The The automatic checkpointing that happens at the `yield` call on `context.df.Task.all` ensures that a potential midway crash or reboot doesn't require restarting an already completed task. +# [JavaScript (PM4)](#tab/javascript-v4) ++```javascript +const df = require("durable-functions"); ++df.app.orchestration("fanOutFanInDemo", function* (context) { + const parallelTasks = []; ++ // Get a list of N work items to process in parallel. + const workBatch = yield context.df.callActivity("F1"); + for (let i = 0; i < workBatch.length; i++) { + parallelTasks.push(context.df.callActivity("F2", workBatch[i])); + } ++ yield context.df.Task.all(parallelTasks); ++ // Aggregate all N outputs and send the result to F3. + const sum = parallelTasks.reduce((prev, curr) => prev + curr, 0); + yield context.df.callActivity("F3", sum); +}); +``` ++The fan-out work is distributed to multiple instances of the `F2` function. The work is tracked by using a dynamic list of tasks. `context.df.Task.all` API is called to wait for all the called functions to finish. Then, the `F2` function outputs are aggregated from the dynamic task list and passed to the `F3` function. ++The automatic checkpointing that happens at the `yield` call on `context.df.Task.all` ensures that a potential midway crash or reboot doesn't require restarting an already completed task. ++ # [Python](#tab/python) ```python The async HTTP API pattern addresses the problem of coordinating the state of lo  -Durable Functions provides **built-in support** for this pattern, simplifying or even removing the code you need to write to interact with long-running function executions. 
For example, the Durable Functions quickstart samples ([C#](durable-functions-create-first-csharp.md), [JavaScript](quickstart-js-vscode.md), [Python](quickstart-python-vscode.md), [PowerShell](quickstart-powershell-vscode.md), and [Java](quickstart-java.md)) show a simple REST command that you can use to start new orchestrator function instances. After an instance starts, the extension exposes webhook HTTP APIs that query the orchestrator function status. +Durable Functions provides **built-in support** for this pattern, simplifying or even removing the code you need to write to interact with long-running function executions. For example, the Durable Functions quickstart samples ([C#](durable-functions-create-first-csharp.md), [JavaScript](quickstart-js-vscode.md), [TypeScript](quickstart-ts-vscode.md), [Python](quickstart-python-vscode.md), [PowerShell](quickstart-powershell-vscode.md), and [Java](quickstart-java.md)) show a simple REST command that you can use to start new orchestrator function instances. After an instance starts, the extension exposes webhook HTTP APIs that query the orchestrator function status. The following example shows REST commands that start an orchestrator and query its status. For clarity, some protocol details are omitted from the example. Content-Type: application/json Because the Durable Functions runtime manages state for you, you don't need to implement your own status-tracking mechanism. -The Durable Functions extension exposes built-in HTTP APIs that manage long-running orchestrations. You can alternatively implement this pattern yourself by using your own function triggers (such as HTTP, a queue, or Azure Event Hubs) and the [orchestration client binding](durable-functions-bindings.md#orchestration-client). For example, you might use a queue message to trigger termination. Or, you might use an HTTP trigger that's protected by an Azure Active Directory authentication policy instead of the built-in HTTP APIs that use a generated key for authentication. +The Durable Functions extension exposes built-in HTTP APIs that manage long-running orchestrations. You can alternatively implement this pattern yourself by using your own function triggers (such as HTTP, a queue, or Azure Event Hubs) and the [durable client binding](durable-functions-bindings.md#orchestration-client). For example, you might use a queue message to trigger termination. Or, you might use an HTTP trigger that's protected by an Azure Active Directory authentication policy instead of the built-in HTTP APIs that use a generated key for authentication. For more information, see the [HTTP features](durable-functions-http-features.md) article, which explains how you can expose asynchronous, long-running processes over HTTP using the Durable Functions extension. 
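As a hedged sketch of a client function that starts an orchestration over HTTP in the V4 style used elsewhere in this article (the orchestration name is assumed, and the durable-client input registration is simplified to match the client snippets shown below), the built-in status payload can be returned with `createCheckStatusResponse`:

```javascript
const df = require("durable-functions");
const { app } = require("@azure/functions");

// Sketch of an HTTP starter; assumes an orchestration registered as "helloSequence".
app.post("startOrchestration", async function (request, context) {
    const client = df.getClient(context);
    const instanceId = await client.startNew("helloSequence");
    context.log(`Started orchestration with ID = ${instanceId}.`);
    // Returns a 202 response containing the status-query and management URLs.
    return client.createCheckStatusResponse(request, instanceId);
});
```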
public static async Task Run( } ``` -# [JavaScript](#tab/javascript) +# [JavaScript (PM3)](#tab/javascript-v3) ```javascript const df = require("durable-functions"); module.exports = df.orchestrator(function*(context) { }); ``` +# [JavaScript (PM4)](#tab/javascript-v4) ++```javascript +const df = require("durable-functions"); +const { DateTime } = require("luxon"); ++df.app.orchestration("monitorDemo", function* (context) { + const jobId = context.df.getInput(); + const pollingInterval = getPollingInterval(); + const expiryTime = getExpiryTime(); ++ while (DateTime.fromJSDate(context.df.currentUtcDateTime) < DateTime.fromJSDate(expiryTime)) { + const jobStatus = yield context.df.callActivity("GetJobStatus", jobId); + if (jobStatus === "Completed") { + // Perform an action when a condition is met. + yield context.df.callActivity("SendAlert", machineId); + break; + } ++ // Orchestration sleeps until this time. + const nextCheck = DateTime.fromJSDate(context.df.currentUtcDateTime).plus({ + seconds: pollingInterval, + }); + yield context.df.createTimer(nextCheck.toJSDate()); + } ++ // Perform more work here, or let the orchestration end. +}); +``` + # [Python](#tab/python) ```python public static async Task Run( To create the durable timer, call `context.CreateTimer`. The notification is received by `context.WaitForExternalEvent`. Then, `Task.WhenAny` is called to decide whether to escalate (timeout happens first) or process the approval (the approval is received before timeout). -# [JavaScript](#tab/javascript) +# [JavaScript (PM3)](#tab/javascript-v3) ```javascript const df = require("durable-functions"); module.exports = df.orchestrator(function*(context) { const durableTimeout = context.df.createTimer(dueTime.toDate()); const approvalEvent = context.df.waitForExternalEvent("ApprovalEvent");- if (approvalEvent === yield context.df.Task.any([approvalEvent, durableTimeout])) { + const winningEvent = yield context.df.Task.any([approvalEvent, durableTimeout]); + if (winningEvent === approvalEvent) { durableTimeout.cancel(); yield context.df.callActivity("ProcessApproval", approvalEvent.result); } else { module.exports = df.orchestrator(function*(context) { To create the durable timer, call `context.df.createTimer`. The notification is received by `context.df.waitForExternalEvent`. Then, `context.df.Task.any` is called to decide whether to escalate (timeout happens first) or process the approval (the approval is received before timeout). +# [JavaScript (PM4)](#tab/javascript-v4) ++```javascript +const df = require("durable-functions"); +const { DateTime } = require("luxon"); ++df.app.orchestration("humanInteractionDemo", function* (context) { + yield context.df.callActivity("RequestApproval"); ++ const dueTime = DateTime.fromJSDate(context.df.currentUtcDateTime).plus({ hours: 72 }); + const durableTimeout = context.df.createTimer(dueTime.toJSDate()); ++ const approvalEvent = context.df.waitForExternalEvent("ApprovalEvent"); + const winningEvent = yield context.df.Task.any([approvalEvent, durableTimeout]); + if (winningEvent === approvalEvent) { + durableTimeout.cancel(); + yield context.df.callActivity("ProcessApproval", approvalEvent.result); + } else { + yield context.df.callActivity("Escalate"); + } +}); +``` ++To create the durable timer, call `context.df.createTimer`. The notification is received by `context.df.waitForExternalEvent`. Then, `context.df.Task.any` is called to decide whether to escalate (timeout happens first) or process the approval (the approval is received before timeout). 
++ # [Python](#tab/python) ```python public static async Task Run( } ``` -# [JavaScript](#tab/javascript) +# [JavaScript (PM3)](#tab/javascript-v3) ```javascript const df = require("durable-functions"); module.exports = async function (context) { }; ``` +# [JavaScript (PM4)](#tab/javascript-v4) ++```javascript +const df = require("durable-functions"); +const { app } = require("@azure/functions"); ++app.get("raiseEventToOrchestration", async function (request, context) { + const instanceId = await request.text(); + const client = df.getClient(context); + const isApproved = true; + await client.raiseEvent(instanceId, "ApprovalEvent", isApproved); +}); +``` + # [Python](#tab/python) ```python public class Counter Durable entities are currently not supported in the .NET-isolated worker. -# [JavaScript](#tab/javascript) +# [JavaScript (PM3)](#tab/javascript-v3) ```javascript const df = require("durable-functions"); module.exports = df.entity(function(context) { }); ``` +# [JavaScript (PM4)](#tab/javascript-v4) ++```javascript +const df = require("durable-functions"); ++df.app.entity("entityDemo", function (context) { + const currentValue = context.df.getState(() => 0); + switch (context.df.operationName) { + case "add": + const amount = context.df.getInput(); + context.df.setState(currentValue + amount); + break; + case "reset": + context.df.setState(0); + break; + case "get": + context.df.return(currentValue); + break; + } +}); +``` + # [Python](#tab/python) ```python public static async Task Run( Durable entities are currently not supported in the .NET-isolated worker. -# [JavaScript](#tab/javascript) +# [JavaScript (PM3)](#tab/javascript-v3) ```javascript const df = require("durable-functions");+const { app } = require("@azure/functions"); module.exports = async function (context) { const client = df.getClient(context); const entityId = new df.EntityId("Counter", "myCounter");- await context.df.signalEntity(entityId, "add", 1); + await client.signalEntity(entityId, "add", 1); }; ``` +# [JavaScript (PM4)](#tab/javascript-v4) ++```javascript +const df = require("durable-functions"); +const { app } = require("@azure/functions"); ++app.get("signalEntityDemo", async function (request, context) { + const client = df.getClient(context); + const entityId = new df.EntityId("Counter", "myCounter"); + await client.signalEntity(entityId, "add", 1); +}); +``` + # [Python](#tab/python) ```python You can get started with Durable Functions in under 10 minutes by completing one * [C# using Visual Studio 2019](durable-functions-create-first-csharp.md) * [JavaScript using Visual Studio Code](quickstart-js-vscode.md)+* [TypeScript using Visual Studio Code](quickstart-ts-vscode.md) * [Python using Visual Studio Code](quickstart-python-vscode.md) * [PowerShell using Visual Studio Code](quickstart-powershell-vscode.md) * [Java using Maven](quickstart-java.md) |
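As a companion to the entity snippets in the change above, an orchestration can also read an entity's state synchronously with `callEntity`. This is a hedged sketch that reuses the `entityDemo` counter shown earlier and assumes the preview package keeps the `callEntity` API from earlier releases:

```javascript
const df = require("durable-functions");

df.app.orchestration("readCounterDemo", function* (context) {
    // Assumes the "entityDemo" counter entity shown earlier is registered.
    const entityId = new df.EntityId("entityDemo", "myCounter");

    // callEntity dispatches the "get" operation and waits for the entity's response.
    const currentValue = yield context.df.callEntity(entityId, "get");

    return currentValue;
});
```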
azure-functions | Durable Functions Phone Verification | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/durable/durable-functions-phone-verification.md | This sample demonstrates how to build a [Durable Functions](durable-functions-ov This sample implements an SMS-based phone verification system. These types of flows are often used when verifying a customer's phone number or for multi-factor authentication (MFA). It is a powerful example because the entire implementation is done using a couple small functions. No external data store, such as a database, is required. +> [!NOTE] +> The new programming model for authoring Functions in Node.js (V4) is currently in preview. Compared to the current model, the new experience is designed to be more idiomatic and intuitive for JavaScript and TypeScript developers. To learn more, see the Azure Functions Node.js [developer guide](../functions-reference-node.md?pivots=nodejs-model-v4). +> +> In the following code snippets, JavaScript (PM4) denotes programming model V4, the new experience. + [!INCLUDE [durable-functions-prerequisites](../../../includes/durable-functions-prerequisites.md)] ## Scenario overview This article walks through the following functions in the sample app: > [!NOTE] > It may not be obvious at first, but this orchestrator does not violate the [deterministic orchestration constraint](durable-functions-code-constraints.md). It is deterministic because the `CurrentUtcDateTime` property is used to calculate the timer expiration time, and it returns the same value on every replay at this point in the orchestrator code. This behavior is important to ensure that the same `winner` results from every repeated call to `Task.WhenAny`. -# [JavaScript](#tab/javascript) +# [JavaScript (PM3)](#tab/javascript-v3) The **E4_SmsPhoneVerification** function uses the standard *function.json* for orchestrator functions. Here is the code that implements the function: > [!NOTE] > It may not be obvious at first, but this orchestrator does not violate the [deterministic orchestration constraint](durable-functions-code-constraints.md). It is deterministic because the `currentUtcDateTime` property is used to calculate the timer expiration time, and it returns the same value on every replay at this point in the orchestrator code. This behavior is important to ensure that the same `winner` results from every repeated call to `context.df.Task.any`. +# [JavaScript (PM4)](#tab/javascript-v4) ++Here is the code that implements the `smsPhoneVerification` orchestration function: ++ # [Python](#tab/python) The **E4_SmsPhoneVerification** function uses the standard *function.json* for orchestrator functions. The **E4_SendSmsChallenge** function uses the Twilio binding to send the SMS mes > [!NOTE] > You must first install the `Microsoft.Azure.WebJobs.Extensions.Twilio` Nuget package for Functions to run the sample code. Don't also install the main [Twilio nuget package](https://www.nuget.org/packages/Twilio/) because this can cause versioning problems that result in build errors. 
-# [JavaScript](#tab/javascript) +# [JavaScript (PM3)](#tab/javascript-v3) The *function.json* is defined as follows: And here is the code that generates the four-digit challenge code and sends the :::code language="javascript" source="~/azure-functions-durable-js/samples/E4_SendSmsChallenge/index.js"::: +# [JavaScript (PM4)](#tab/javascript-v4) ++Here is the code that generates the four-digit challenge code and sends the SMS message: ++ # [Python](#tab/python) The *function.json* is defined as follows: Location: http://{host}/runtime/webhooks/durabletask/instances/741c65651d4c40cea The orchestrator function receives the supplied phone number and immediately sends it an SMS message with a randomly generated 4-digit verification code — for example, *2168*. The function then waits 90 seconds for a response. -To reply with the code, you can use [`RaiseEventAsync` (.NET) or `raiseEvent` (JavaScript)](durable-functions-instance-management.md) inside another function or invoke the **sendEventUrl** HTTP POST webhook referenced in the 202 response above, replacing `{eventName}` with the name of the event, `SmsChallengeResponse`: +To reply with the code, you can use [`RaiseEventAsync` (.NET) or `raiseEvent` (JavaScript/TypeScript)](durable-functions-instance-management.md) inside another function or invoke the **sendEventPostUri** HTTP POST webhook referenced in the 202 response above, replacing `{eventName}` with the name of the event, `SmsChallengeResponse`: ``` POST http://{host}/runtime/webhooks/durabletask/instances/741c65651d4c40cea29acdd5bb47baf1/raiseEvent/SmsChallengeResponse?taskHub=DurableFunctionsHub&connection=Storage&code={systemKey} Content-Length: 145 ## Next steps -This sample has demonstrated some of the advanced capabilities of Durable Functions, notably `WaitForExternalEvent` and `CreateTimer` APIs. You've seen how these can be combined with `Task.WaitAny` to implement a reliable timeout system, which is often useful for interacting with real people. You can learn more about how to use Durable Functions by reading a series of articles that offer in-depth coverage of specific topics. +This sample has demonstrated some of the advanced capabilities of Durable Functions, notably `WaitForExternalEvent` and `CreateTimer` APIs. You've seen how these can be combined with `Task.WaitAny` (C#)/`context.df.Task.any` (JavaScript/TypeScript)/`context.task_any` (Python) to implement a reliable timeout system, which is often useful for interacting with real people. You can learn more about how to use Durable Functions by reading a series of articles that offer in-depth coverage of specific topics. > [!div class="nextstepaction"] > [Go to the first article in the series](durable-functions-bindings.md) |
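The PM4 tab above references the `smsPhoneVerification` orchestration by include, so its body doesn't appear in this excerpt. Purely as an illustrative sketch of the flow the article describes (send a challenge code, wait up to 90 seconds, allow a few attempts), and not the published sample source:

```javascript
const df = require("durable-functions");
const { DateTime } = require("luxon");

df.app.orchestration("smsPhoneVerification", function* (context) {
    const phoneNumber = context.df.getInput();

    // Activity name is an assumption for this sketch.
    const challengeCode = yield context.df.callActivity("sendSmsChallenge", phoneNumber);

    // Give the user 90 seconds to reply before the verification expires.
    const expiration = DateTime.fromJSDate(context.df.currentUtcDateTime).plus({ seconds: 90 });
    const timeoutTask = context.df.createTimer(expiration.toJSDate());

    let authorized = false;
    for (let attempt = 0; attempt < 3 && !authorized; attempt++) {
        const responseTask = context.df.waitForExternalEvent("SmsChallengeResponse");
        const winner = yield context.df.Task.any([responseTask, timeoutTask]);

        if (winner === timeoutTask) {
            break; // The 90-second window expired before a correct code arrived.
        }
        if (responseTask.result === challengeCode) {
            authorized = true;
            timeoutTask.cancel(); // Stop the pending expiration timer.
        }
        // Otherwise the code was wrong: loop and wait for another attempt.
    }

    return authorized;
});
```

The activity and event names here mirror the flow described in the article; the sample repository remains the authoritative source.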
azure-functions | Durable Functions Sequence | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/durable/durable-functions-sequence.md | ms.devlang: csharp, javascript, python # Function chaining in Durable Functions - Hello sequence sample -Function chaining refers to the pattern of executing a sequence of functions in a particular order. Often the output of one function needs to be applied to the input of another function. This article describes the chaining sequence that you create when you complete the Durable Functions quickstart ([C#](durable-functions-create-first-csharp.md), [JavaScript](quickstart-js-vscode.md), [Python](quickstart-python-vscode.md), [PowerShell](quickstart-powershell-vscode.md), or [Java](quickstart-java.md)). For more information about Durable Functions, see [Durable Functions overview](durable-functions-overview.md). +Function chaining refers to the pattern of executing a sequence of functions in a particular order. Often the output of one function needs to be applied to the input of another function. This article describes the chaining sequence that you create when you complete the Durable Functions quickstart ([C#](durable-functions-create-first-csharp.md), [JavaScript](quickstart-js-vscode.md), [TypeScript](quickstart-ts-vscode.md), [Python](quickstart-python-vscode.md), [PowerShell](quickstart-powershell-vscode.md), or [Java](quickstart-java.md)). For more information about Durable Functions, see [Durable Functions overview](durable-functions-overview.md). [!INCLUDE [durable-functions-prerequisites](../../../includes/durable-functions-prerequisites.md)] +> [!NOTE] +> The new programming model for authoring Functions in Node.js (V4) is currently in preview. Compared to the current model, the new experience is designed to be more idiomatic and intuitive for JavaScript and TypeScript developers. To learn more, see the Azure Functions Node.js [developer guide](../functions-reference-node.md?pivots=nodejs-model-v4). +> +> In the following code snippets, JavaScript (PM4) denotes programming model V4, the new experience. + ## The functions This article explains the following functions in the sample app: All C# orchestration functions must have a parameter of type `DurableOrchestrati The code calls `E1_SayHello` three times in sequence with different parameter values. The return value of each call is added to the `outputs` list, which is returned at the end of the function. -# [JavaScript](#tab/javascript) --> [!NOTE] -> JavaScript Durable Functions are available for the Functions 3.0 runtime only. +# [JavaScript (PM3)](#tab/javascript-v3) #### function.json All JavaScript orchestration functions must include the [`durable-functions` mod The `context` object contains a `df` durable orchestration context object that lets you call other *activity* functions and pass input parameters using its `callActivity` method. The code calls `E1_SayHello` three times in sequence with different parameter values, using `yield` to indicate the execution should wait on the async activity function calls to be returned. The return value of each call is added to the `outputs` array, which is returned at the end of the function. +# [JavaScript (PM4)](#tab/javascript-v4) +++All JavaScript orchestration functions must include the [`durable-functions` module](https://www.npmjs.com/package/durable-functions). This module enables you to write Durable Functions in JavaScript. 
To use the V4 node programming model, you need to install the preview `v3.x` version of `durable-functions`. ++There are two significant differences between an orchestrator function and other JavaScript functions: ++1. The orchestrator function is a [generator function](/scripting/javascript/advanced/iterators-and-generators-javascript). +2. The function must be synchronous. The function should simply 'return'. ++The `context` object contains a `df` durable orchestration context object that lets you call other *activity* functions and pass input parameters using its `callActivity` method. The code calls `sayHello` three times in sequence with different parameter values, using `yield` to indicate the execution should wait on the async activity function calls to be returned. The return value of each call is added to the `outputs` array, which is returned at the end of the function. + # [Python](#tab/python) > [!NOTE] Instead of binding to an `IDurableActivityContext`, you can bind directly to the [!code-csharp[Main](~/samples-durable-functions/samples/precompiled/HelloSequence.cs?range=34-38)] -# [JavaScript](#tab/javascript) +# [JavaScript (PM3)](#tab/javascript-v3) #### E1_SayHello/function.json The implementation of `E1_SayHello` is a relatively trivial string formatting op Unlike the orchestration function, an activity function needs no special setup. The input passed to it by the orchestrator function is located on the `context.bindings` object under the name of the `activityTrigger` binding - in this case, `context.bindings.name`. The binding name can be set as a parameter of the exported function and accessed directly, which is what the sample code does. +# [JavaScript (PM4)](#tab/javascript-v4) ++The implementation of `sayHello` is a relatively trivial string formatting operation. +++Unlike the orchestration function, an activity function needs no special setup. The input passed to it by the orchestrator function is the first argument to the function. The second argument is the invocation context, which is not used in this example. ++ # [Python](#tab/python) #### E1_SayHello/function.json You can start an instance of orchestrator function using a client function. You To interact with orchestrators, the function must include a `DurableClient` input binding. You use the client to start an orchestration. It can also help you return an HTTP response containing URLs for checking the status of the new orchestration. -# [JavaScript](#tab/javascript) +# [JavaScript (PM3)](#tab/javascript-v3) #### HttpStart/function.json To interact with orchestrators, the function must include a `durableClient` inpu Use `df.getClient` to obtain a `DurableOrchestrationClient` object. You use the client to start an orchestration. It can also help you return an HTTP response containing URLs for checking the status of the new orchestration. +# [JavaScript (PM4)](#tab/javascript-v4) +++To manage and interact with orchestrators, the function needs a `durableClient` input binding. This binding needs to be specified in the `extraInputs` argument when registering the function. A `durableClient` input can be obtained by calling `df.input.durableClient()`. ++Use `df.getClient` to obtain a `DurableClient` object. You use the client to start an orchestration. It can also help you return an HTTP response containing URLs for checking the status of the new orchestration. + # [Python](#tab/python) #### HttpStart/function.json |
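The V4 client function described in the change above, with the `durableClient` binding supplied through `extraInputs`, isn't reproduced in this excerpt. A minimal sketch of such an HTTP starter, assuming the preview `durable-functions` package and the `@azure/functions` v4 API (the route and function names are illustrative):

```javascript
const { app } = require("@azure/functions");
const df = require("durable-functions");

app.http("httpStart", {
    route: "orchestrators/{orchestratorName}",
    extraInputs: [df.input.durableClient()], // Makes the durable client available to df.getClient.
    handler: async (request, context) => {
        const client = df.getClient(context);
        const instanceId = await client.startNew(request.params.orchestratorName);

        context.log(`Started orchestration with ID = '${instanceId}'.`);

        // Returns a 202 response with status-query and management URLs.
        return client.createCheckStatusResponse(request, instanceId);
    },
});
```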
azure-functions | Durable Functions Sub Orchestrations | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/durable/durable-functions-sub-orchestrations.md | Title: Sub-orchestrations for Durable Functions - Azure description: How to call orchestrations from orchestrations in the Durable Functions extension for Azure Functions. Previously updated : 12/07/2022 Last updated : 02/14/2023 Sub-orchestrator functions behave just like activity functions from the caller's > [!NOTE] > Sub-orchestrations are not yet supported in PowerShell. +> [!NOTE] +> The new programming model for authoring Functions in Node.js (V4) is currently in preview. Compared to the current model, the new experience is designed to be more idiomatic and intuitive for JavaScript and TypeScript developers. To learn more, see the Azure Functions Node.js [developer guide](../functions-reference-node.md?pivots=nodejs-model-v4). +> +> In the following code snippets, JavaScript (PM4) denotes programming model V4, the new experience. + ## Example The following example illustrates an IoT ("Internet of Things") scenario where there are multiple devices that need to be provisioned. The following function represents the provisioning workflow that needs to be executed for each device: public static async Task DeviceProvisioningOrchestration( } ``` -# [JavaScript](#tab/javascript) +# [JavaScript (PM3)](#tab/javascript-v3) ```javascript const df = require("durable-functions"); module.exports = df.orchestrator(function*(context) { // Step 4: ... }); ```+# [JavaScript (PM4)](#tab/javascript-v4) ++```javascript +const df = require("durable-functions"); ++df.app.orchestration("deviceProvisioningOrchestration", function* (context) { + const deviceId = context.df.getInput(); ++ // Step 1: Create an installation package in blob storage and return a SAS URL. + const sasUrl = yield context.df.callActivity("createInstallationPackage", deviceId); ++ // Step 2: Notify the device that the installation package is ready. + yield context.df.callActivity("sendPackageUrlToDevice", { id: deviceId, url: sasUrl }); ++ // Step 3: Wait for the device to acknowledge that it has downloaded the new package. + yield context.df.waitForExternalEvent("downloadCompletedAck"); ++ // Step 4: ... +}); +``` # [Python](#tab/python) public static async Task ProvisionNewDevices( } ``` -# [JavaScript](#tab/javascript) +# [JavaScript (PM3)](#tab/javascript-v3) ```javascript const df = require("durable-functions"); module.exports = df.orchestrator(function*(context) { }); ``` +# [JavaScript (PM4)](#tab/javascript-v4) ++```javascript +const df = require("durable-functions"); ++df.app.orchestration("provisionNewDevices", function* (context) { + const deviceIds = yield context.df.callActivity("getNewDeviceIds"); ++ // Run multiple device provisioning flows in parallel + const provisioningTasks = []; + var id = 0; + for (const deviceId of deviceIds) { + const child_id = context.df.instanceId + `:${id}`; + const provisionTask = context.df.callSubOrchestrator( + "deviceProvisioningOrchestration", + deviceId, + child_id + ); + provisioningTasks.push(provisionTask); + id++; + } ++ yield context.df.Task.all(provisioningTasks); ++ // ... +}); +``` + # [Python](#tab/python) ```Python |
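A natural extension of the fan-out shown above is to wrap each child orchestration in retry options so transient provisioning failures are retried. This is a hedged sketch built on the orchestration names above; `RetryOptions` and `callSubOrchestratorWithRetry` come from the `durable-functions` package, and their exact shape in the preview build should be verified:

```javascript
const df = require("durable-functions");

df.app.orchestration("provisionNewDevicesWithRetry", function* (context) {
    const deviceIds = yield context.df.callActivity("getNewDeviceIds");

    // Retry each child orchestration up to 3 times, starting 5 seconds after a failure.
    const retryOptions = new df.RetryOptions(5000, 3);

    const provisioningTasks = deviceIds.map((deviceId, index) =>
        context.df.callSubOrchestratorWithRetry(
            "deviceProvisioningOrchestration",
            retryOptions,
            deviceId,
            `${context.df.instanceId}:${index}`
        )
    );

    yield context.df.Task.all(provisioningTasks);
});
```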
azure-functions | Quickstart Js Vscode | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/durable/quickstart-js-vscode.md | Title: Create your first durable function in Azure using JavaScript description: Create and publish an Azure Durable Function in JavaScript using Visual Studio Code. Previously updated : 05/07/2020 Last updated : 02/13/2023 ms.devlang: javascript +zone_pivot_groups: functions-nodejs-model # Create your first durable function in JavaScript- ++>[!NOTE] +>The v4 programming model for authoring Functions in Node.js is currently in preview. Compared to the current v3 model, the v4 model is designed to have a more idiomatic and intuitive experience for JavaScript and TypeScript developers. To learn more, see the [Developer Reference Guide](../functions-reference-node.md). +> +>Use the selector at the top to choose the programming model of your choice for completing this quickstart. ++ ## Prerequisites To complete this tutorial: * Install [Visual Studio Code](https://code.visualstudio.com/download). * Install the [Azure Functions](https://marketplace.visualstudio.com/items?itemName=ms-azuretools.vscode-azurefunctions) VS Code extension+* Install the [Azure Functions](https://marketplace.visualstudio.com/items?itemName=ms-azuretools.vscode-azurefunctions) VS Code extension version `1.10.4` or above. * Make sure you have the latest version of the [Azure Functions Core Tools](../functions-run-local.md).+* Make sure you have [Azure Functions Core Tools](../functions-run-local.md) version `v4.0.5085` or above. * Durable Functions require an Azure storage account. You need an Azure subscription. -* Make sure that you have version 10.x or 12.x of [Node.js](https://nodejs.org/) installed. +* Make sure that you have version 16.x+ of [Node.js](https://nodejs.org/) installed. +* Make sure that you have version 18.x+ of [Node.js](https://nodejs.org/) installed. [!INCLUDE [quickstarts-free-trial-note](../../../includes/quickstarts-free-trial-note.md)] To complete this tutorial: In this section, you use Visual Studio Code to create a local Azure Functions project. -1. In Visual Studio Code, press F1 (or Ctrl/Cmd+Shift+P) to open the command palette. In the command palette, search for and select `Azure Functions: Create New Project...`. +1. In Visual Studio Code, press <kbd>F1</kbd> (or <kbd>Ctrl/Cmd+Shift+P</kbd>) to open the command palette. In the command palette, search for and select `Azure Functions: Create New Project...`. -  +  -1. Choose an empty folder location for your project and choose **Select**. +2. Choose an empty folder location for your project and choose **Select**. -1. Following the prompts, provide the following information: +3. Following the prompts, provide the following information: ++ | Prompt | Value | Description | + | | -- | -- | + | Select a language for your function app project | JavaScript | Create a local Node.js Functions project. | + | Select a JavaScript programming model | Model V3 | Choose the V3 programming model. | + | Select a version | Azure Functions v4 | You only see this option when the Core Tools aren't already installed. In this case, Core Tools are installed the first time you run the app. | + | Select a template for your project's first function | Skip for now | | + | Select how you would like to open your project | Open in current window | Reopens VS Code in the folder you selected. | ++3. 
Following the prompts, provide the following information: | Prompt | Value | Description | | | -- | -- | | Select a language for your function app project | JavaScript | Create a local Node.js Functions project. |+ | Select a JavaScript programming model | Model V4 (Preview) | Choose the V4 programming model (in preview). | | Select a version | Azure Functions v4 | You only see this option when the Core Tools aren't already installed. In this case, Core Tools are installed the first time you run the app. | | Select a template for your project's first function | Skip for now | | | Select how you would like to open your project | Open in current window | Reopens VS Code in the folder you selected. | + Visual Studio Code installs the Azure Functions Core Tools, if needed. It also creates a function app project in a folder. This project contains the [host.json](../functions-host-json.md) and [local.settings.json](../functions-develop-local.md#local-settings-file) configuration files. -A package.json file is also created in the root folder. +A `package.json` file is also created in the root folder. ## Install the Durable Functions npm package To work with Durable Functions in a Node.js function app, you use a library called `durable-functions`.+To use the V4 programming model, you need to install the preview `v3.x` version of `durable-functions`. -1. Use the *View* menu or Ctrl+Shift+` to open a new terminal in VS Code. +1. Use the *View* menu or <kbd>Ctrl + Shift + `</kbd> to open a new terminal in VS Code. -1. Install the `durable-functions` npm package by running `npm install durable-functions` in the root directory of the function app. +2. Install the `durable-functions` npm package by running `npm install durable-functions` in the root directory of the function app. +2. Install the `durable-functions` npm package preview version by running `npm install durable-functions@preview` in the root directory of the function app. ## Creating your functions The most basic Durable Functions app contains three functions: * *Activity function* - called by the orchestrator function, performs work, and optionally returns a value. * *Client function* - a regular Azure Function that starts an orchestrator function. This example uses an HTTP triggered function. + ### Orchestrator function You use a template to create the durable function code in your project. You use a template to create the durable function code in your project. | Prompt | Value | Description | | | -- | -- | | Select a template for your function | Durable Functions orchestrator | Create a Durable Functions orchestration |+ | Choose a durable storage type. | Azure Storage (Default) | Select the storage backend used for Durable Functions. | | Provide a function name | HelloOrchestrator | Name of your durable function | You've added an orchestrator to coordinate activity functions. Open *HelloOrchestrator/index.js* to see the orchestrator function. Each call to `context.df.callActivity` invokes an activity function named `Hello`. Next, you'll add the referenced `Hello` activity function. | Select a template for your function | Durable Functions activity | Create an activity function | | Provide a function name | Hello | Name of your activity function | -You've added the `Hello` activity function that is invoked by the orchestrator. Open *Hello/index.js* to see that it's taking a name as input and returning a greeting. An activity function is where you'll perform actions such as making a database call or performing a computation. 
+You've added the `Hello` activity function that is invoked by the orchestrator. Open *Hello/index.js* to see that it's taking a name as input and returning a greeting. An activity function is where you perform "the real work" in your workflow: work such as making a database call or performing some non-deterministic computation. Finally, you'll add an HTTP triggered function that starts the orchestration. Finally, you'll add an HTTP triggered function that starts the orchestration. You've added an HTTP triggered function that starts an orchestration. Open *DurableFunctionsHttpStart/index.js* to see that it uses `client.startNew` to start a new orchestration. Then it uses `client.createCheckStatusResponse` to return an HTTP response containing URLs that can be used to monitor and manage the new orchestration. You now have a Durable Functions app that can be run locally and deployed to Azure.++One of the benefits of the V4 Programming Model is the flexibility of where you write your functions. +In the V4 Model, you can use a single template to create all three functions in one file in your project. ++1. In the command palette, search for and select `Azure Functions: Create Function...`. ++1. Following the prompts, provide the following information: ++ | Prompt | Value | Description | + | | -- | -- | + | Select a template for your function | Durable Functions orchestrator | Create a file with a Durable Functions orchestration, an Activity function, and a Durable Client starter function. | + | Choose a durable storage type | Azure Storage (Default) | Select the storage backend used for Durable Functions. | + | Provide a function name | hello | Name used for your durable functions | ++Open *src/functions/hello.js* to view the functions you created. ++You've created an orchestrator called `helloOrchestrator` to coordinate activity functions. Each call to `context.df.callActivity` invokes an activity function called `hello`. ++You've also added the `hello` activity function that is invoked by the orchestrator. In the same file, you can see that it's taking a name as input and returning a greeting. An activity function is where you perform "the real work" in your workflow: work such as making a database call or performing some non-deterministic computation. ++Lastly, you've also added an HTTP triggered function that starts an orchestration. In the same file, you can see that it uses `client.startNew` to start a new orchestration. Then it uses `client.createCheckStatusResponse` to return an HTTP response containing URLs that can be used to monitor and manage the new orchestration. ++You now have a Durable Functions app that can be run locally and deployed to Azure. ## Test the function locally ++> [!NOTE] +> To run the V4 programming model, your app needs to have the `EnableWorkerIndexing` feature flag set. When running locally, you need to set `AzureWebJobsFeaturesFlags` to value of `EnableWorkerIndexing` in your `local.settings.json` file. This should already be set when creating your project. To verify, check the following line exists in your `local.settings.json` file, and add it if it doesn't. +> +> ```json +> "AzureWebJobsFeatureFlags": "EnableWorkerIndexing" +> ``` ++ Azure Functions Core Tools lets you run an Azure Functions project on your local development computer. You're prompted to install these tools the first time you start a function from Visual Studio Code. 1. To test your function, set a breakpoint in the `Hello` activity function code (*Hello/index.js*). 
Press F5 or select `Debug: Start Debugging` from the command palette to start the function app project. Output from Core Tools is displayed in the **Terminal** panel.+1. To test your function, set a breakpoint in the `hello` activity function code (*src/functions/hello.js*). Press F5 or select `Debug: Start Debugging` from the command palette to start the function app project. Output from Core Tools is displayed in the **Terminal** panel. - > [!NOTE] - > Refer to the [Durable Functions Diagnostics](durable-functions-diagnostics.md#debugging) for more information on debugging. + > [!NOTE] + > Refer to the [Durable Functions Diagnostics](durable-functions-diagnostics.md#debugging) for more information on debugging. -1. Durable Functions requires an Azure Storage account to run. When VS Code prompts you to select a storage account, choose **Select storage account**. +2. Durable Functions requires an Azure Storage account to run. When VS Code prompts you to select a storage account, choose **Select storage account**. -  +  -1. Following the prompts, provide the following information to create a new storage account in Azure. +3. Following the prompts, provide the following information to create a new storage account in Azure. | Prompt | Value | Description | | | -- | -- | Azure Functions Core Tools lets you run an Azure Functions project on your local | Select a resource group | *unique name* | Name of the resource group to create | | Select a location | *region* | Select a region close to you | -1. In the **Terminal** panel, copy the URL endpoint of your HTTP-triggered function. +4. In the **Terminal** panel, copy the URL endpoint of your HTTP-triggered function. -  +  -1. Using your browser, or a tool like [Postman](https://www.getpostman.com/) or [cURL](https://curl.haxx.se/), send an HTTP POST request to the URL endpoint. Replace the last segment with the name of the orchestrator function (`HelloOrchestrator`). The URL should be similar to `http://localhost:7071/api/orchestrators/HelloOrchestrator`. +5. Using your browser, or a tool like [Postman](https://www.getpostman.com/) or [cURL](https://curl.haxx.se/), send an HTTP POST request to the URL endpoint. Replace the last segment with the name of the orchestrator function (`HelloOrchestrator`). The URL should be similar to `http://localhost:7071/api/orchestrators/HelloOrchestrator`. +5. Using your browser, or a tool like [Postman](https://www.getpostman.com/) or [cURL](https://curl.haxx.se/), send an HTTP POST request to the URL endpoint. Replace the last segment with the name of the orchestrator function (`helloOrchestrator`). The URL should be similar to `http://localhost:7071/api/orchestrators/helloOrchestrator`. The response is the initial result from the HTTP function letting you know the durable orchestration has started successfully. It is not yet the end result of the orchestration. The response includes a few useful URLs. For now, let's query the status of the orchestration. -1. Copy the URL value for `statusQueryGetUri` and paste it in the browser's address bar and execute the request. Alternatively you can also continue to use Postman to issue the GET request. +6. Copy the URL value for `statusQueryGetUri` and paste it in the browser's address bar and execute the request. Alternatively you can also continue to use Postman to issue the GET request. - The request will query the orchestration instance for the status. 
You should get an eventual response, which shows us the instance has completed, and includes the outputs or results of the durable function. It looks like: + The request queries the orchestration instance for the status. You should get an eventual response, which shows us the instance has completed, and includes the outputs or results of the durable function. It looks like: + ::: zone pivot="nodejs-model-v3" ```json { "name": "HelloOrchestrator", Azure Functions Core Tools lets you run an Azure Functions project on your local "lastUpdatedTime": "2020-03-18T21:54:54Z" } ```+ ::: zone-end + ::: zone pivot="nodejs-model-v4" + ```json + { + "name": "helloOrchestrator", + "instanceId": "6ba3f77933b1461ea1a3828c013c9d56", + "runtimeStatus": "Completed", + "input": "", + "customStatus": null, + "output": [ + "Hello, Tokyo", + "Hello, Seattle", + "Hello, Cairo" + ], + "createdTime": "2023-02-13T23:02:21Z", + "lastUpdatedTime": "2023-02-13T23:02:25Z" + } + ``` + ::: zone-end -1. To stop debugging, press **Shift + F5** in VS Code. +7. To stop debugging, press **Shift + F5** in VS Code. After you've verified that the function runs correctly on your local computer, it's time to publish the project to Azure. After you've verified that the function runs correctly on your local computer, i [!INCLUDE [functions-publish-project-vscode](../../../includes/functions-publish-project-vscode.md)] ++## Update app settings ++To enable your V4 programming model app to run in Azure, you need to add the `EnableWorkerIndexing` flag under the `AzureWebJobsFeatureFlags` app setting. ++1. In Visual Studio Code, press <kbd>F1</kbd> to open the command palette. In the command palette, search for and select `Azure Functions: Add New Setting...`. +2. Choose your new function app, then type `AzureWebJobsFeatureFlags` for the new app setting name, and press <kbd>Enter</kbd>. +3. For the value, type `EnableWorkerIndexing` and press <kbd>Enter</kbd>. ++ ## Test your function in Azure +> [!NOTE] +> To use the V4 node programming model, make sure your app is running on at least version 4.16.5 of the Azure Functions runtime. + 1. Copy the URL of the HTTP trigger from the **Output** panel. The URL that calls your HTTP-triggered function should be in this format: `http://<functionappname>.azurewebsites.net/api/orchestrators/HelloOrchestrator`+1. Copy the URL of the HTTP trigger from the **Output** panel. The URL that calls your HTTP-triggered function should be in this format: `http://<functionappname>.azurewebsites.net/api/orchestrators/helloOrchestrator` 2. Paste this new URL for the HTTP request into your browser's address bar. You should get the same status response as before when using the published app. |
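The quickstart above refers to the generated *src/functions/hello.js* without reproducing it. As a hedged sketch of what the V4 template's orchestrator and activity portions typically look like (the HTTP starter is analogous to the one sketched earlier, and the exact template contents may differ):

```javascript
const df = require("durable-functions");

// Orchestrator: chains three calls to the "hello" activity and collects the results.
df.app.orchestration("helloOrchestrator", function* (context) {
    const outputs = [];
    outputs.push(yield context.df.callActivity("hello", "Tokyo"));
    outputs.push(yield context.df.callActivity("hello", "Seattle"));
    outputs.push(yield context.df.callActivity("hello", "Cairo"));
    return outputs;
});

// Activity: receives a city name and returns a greeting.
df.app.activity("hello", {
    handler: (input) => `Hello, ${input}`,
});
```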
azure-functions | Quickstart Python Vscode | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/durable/quickstart-python-vscode.md | To complete this tutorial: * Durable Functions require an Azure storage account. You need an Azure subscription. -* Make sure that you have version 3.7, 3.8, or 3.9 of [Python](https://www.python.org/) installed. +* Make sure that you have version 3.7, 3.8, 3.9, or 3.10 of [Python](https://www.python.org/) installed. [!INCLUDE [quickstarts-free-trial-note](../../../includes/quickstarts-free-trial-note.md)] In this section, you use Visual Studio Code to create a local Azure Functions pr | | -- | -- | | Select a language for your function app project | Python | Create a local Python Functions project. | | Select a version | Azure Functions v4 | You only see this option when the Core Tools aren't already installed. In this case, Core Tools are installed the first time you run the app. |- | Python version | Python 3.7, 3.8, or 3.9 | Visual Studio Code will create a virtual environment with the version you select. | + | Python version | Python 3.7, 3.8, 3.9, or 3.10 | Visual Studio Code will create a virtual environment with the version you select. | | Select a template for your project's first function | Skip for now | | | Select how you would like to open your project | Open in current window | Reopens Visual Studio Code in the folder you selected. | ::: zone-end In this section, you use Visual Studio Code to create a local Azure Functions pr | | -- | -- | | Select a language | Python (Programming Model V2) | Create a local Python Functions project using the V2 programming model. | | Select a version | Azure Functions v4 | You only see this option when the Core Tools aren't already installed. In this case, Core Tools are installed the first time you run the app. |- | Python version | Python 3.7, 3.8, or 3.9 | Visual Studio Code will create a virtual environment with the version you select. | + | Python version | Python 3.7, 3.8, 3.9, or 3.10 | Visual Studio Code will create a virtual environment with the version you select. | | Select how you would like to open your project | Open in current window | Reopens Visual Studio Code in the folder you selected. | ::: zone-end You now have a Durable Functions app that can be run locally and deployed to Azu ::: zone pivot="python-mode-decorators" +> [!NOTE] +> Using [Extension Bundles](../functions-bindings-register.md#extension-bundles) is not currently supported when trying out the Python V2 programming model with Durable Functions, so you will need to manage your extensions manually. +> To do this, remove the `extensionBundle` section of your `host.json` as described [here](../functions-run-local.md#install-extensions) and run `func extensions install --package Microsoft.Azure.WebJobs.Extensions.DurableTask --version 2.9.1` on your terminal. This will install the Durable Functions extension for your app and will allow you to try out the new experience. + To create a basic Durable Functions app using these 3 function types, replace the contents of `function_app.py` with the following Python code. ```Python |
azure-functions | Quickstart Ts Vscode | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/durable/quickstart-ts-vscode.md | + + Title: Create your first durable function in Azure using TypeScript +description: Create and publish an Azure Durable Function in TypeScript using Visual Studio Code. ++ Last updated : 02/13/2023++ms.devlang: typescript ++zone_pivot_groups: functions-nodejs-model +++# Create your first durable function in TypeScript ++*Durable Functions* is an extension of [Azure Functions](../functions-overview.md) that lets you write stateful functions in a serverless environment. The extension manages state, checkpoints, and restarts for you. ++In this article, you learn how to use the Visual Studio Code Azure Functions extension to locally create and test a "hello world" durable function. This function will orchestrate and chain together calls to other functions. You then publish the function code to Azure. +++>[!NOTE] +>The v4 programming model for authoring Functions in Node.js is currently in Preview. Compared to the current v3 model, the v4 model is designed to have a more idiomatic and intuitive experience for JavaScript and TypeScript developers. To learn more, see the [Developer Reference Guide](../functions-reference-node.md). +> +>Use the selector at the top to choose the programming model of your choice for completing this quickstart. ++ ++## Prerequisites ++To complete this tutorial: ++* Install [Visual Studio Code](https://code.visualstudio.com/download). ++* Install the [Azure Functions](https://marketplace.visualstudio.com/items?itemName=ms-azuretools.vscode-azurefunctions) VS Code extension +* Install the [Azure Functions](https://marketplace.visualstudio.com/items?itemName=ms-azuretools.vscode-azurefunctions) VS Code extension version `1.10.4` or above. ++* Make sure you have the latest version of the [Azure Functions Core Tools](../functions-run-local.md). +* Make sure you have [Azure Functions Core Tools](../functions-run-local.md) version `v4.0.5085` or above. ++* Durable Functions require an Azure storage account. You need an Azure subscription. ++* Make sure that you have version 16.x+ of [Node.js](https://nodejs.org/) installed. +* Make sure that you have version 18.x+ of [Node.js](https://nodejs.org/) installed. +* Make sure that you have [TypeScript](https://www.typescriptlang.org/) v4.x+ installed. +++## <a name="create-an-azure-functions-project"></a>Create your local project ++In this section, you use Visual Studio Code to create a local Azure Functions project. ++1. In Visual Studio Code, press <kbd>F1</kbd> (or <kbd>Ctrl/Cmd + Shift + P</kbd>) to open the command palette. In the command palette, search for and select `Azure Functions: Create New Project...`. ++  ++2. Choose an empty folder location for your project and choose **Select**. ++3. Following the prompts, provide the following information: ++ | Prompt | Value | Description | + | | -- | -- | + | Select a language for your function app project | TypeScript | Create a local Node.js Functions project using TypeScript. | + | Select a JavaScript programming model | Model V3 | Choose the V3 programming model. | + | Select a version | Azure Functions v4 | You only see this option when the Core Tools aren't already installed. In this case, Core Tools are installed the first time you run the app. 
| + | Select a template for your project's first function | Skip for now | | + | Select how you would like to open your project | Open in current window | Reopens VS Code in the folder you selected. | ++3. Following the prompts, provide the following information: ++ | Prompt | Value | Description | + | | -- | -- | + | Select a language for your function app project | TypeScript | Create a local Node.js Functions project using TypeScript. | + | Select a JavaScript programming model | Model V4 (Preview) | Choose the V4 programming model (in preview). | + | Select a version | Azure Functions v4 | You only see this option when the Core Tools aren't already installed. In this case, Core Tools are installed the first time you run the app. | + | Select a template for your project's first function | Skip for now | | + | Select how you would like to open your project | Open in current window | Reopens VS Code in the folder you selected. | +++Visual Studio Code installs the Azure Functions Core Tools, if needed. It also creates a function app project in a folder. This project contains the [host.json](../functions-host-json.md) and [local.settings.json](../functions-develop-local.md#local-settings-file) configuration files. ++A `package.json` file and a `tsconfig.json` file are also created in the root folder. ++## Install the Durable Functions npm package ++To work with Durable Functions in a Node.js function app, you use a library called `durable-functions`. +To use the V4 programming model, you need to install the preview `v3.x` version of `durable-functions`. ++1. Use the *View* menu or <kbd>Ctrl + Shift + `</kbd> to open a new terminal in VS Code. ++2. Install the `durable-functions` npm package by running `npm install durable-functions` in the root directory of the function app. +2. Install the `durable-functions` npm package preview version by running `npm install durable-functions@preview` in the root directory of the function app. ++## Creating your functions ++The most basic Durable Functions app contains three functions: ++* *Orchestrator function* - describes a workflow that orchestrates other functions. +* *Activity function* - called by the orchestrator function, performs work, and optionally returns a value. +* *Client function* - a regular Azure Function that starts an orchestrator function. This example uses an HTTP triggered function. +++### Orchestrator function ++You use a template to create the durable function code in your project. ++1. In the command palette, search for and select `Azure Functions: Create Function...`. ++1. Following the prompts, provide the following information: ++ | Prompt | Value | Description | + | | -- | -- | + | Select a template for your function | Durable Functions orchestrator | Create a Durable Functions orchestration | + | Choose a durable storage type. | Azure Storage (Default) | Select the storage backend used for Durable Functions. | + | Provide a function name | HelloOrchestrator | Name of your durable function | ++You've added an orchestrator to coordinate activity functions. Open *HelloOrchestrator/index.ts* to see the orchestrator function. Each call to `context.df.callActivity` invokes an activity function named `Hello`. ++Next, you'll add the referenced `Hello` activity function. ++### Activity function ++1. In the command palette, search for and select `Azure Functions: Create Function...`. ++1. 
Following the prompts, provide the following information: ++ | Prompt | Value | Description | + | | -- | -- | + | Select a template for your function | Durable Functions activity | Create an activity function | + | Provide a function name | Hello | Name of your activity function | ++You've added the `Hello` activity function that is invoked by the orchestrator. Open *Hello/index.ts* to see that it's taking a name as input and returning a greeting. An activity function is where you perform "the real work" in your workflow: work such as making a database call or performing some non-deterministic computation. ++Finally, you'll add an HTTP triggered function that starts the orchestration. ++### Client function (HTTP starter) ++1. In the command palette, search for and select `Azure Functions: Create Function...`. ++1. Following the prompts, provide the following information: ++ | Prompt | Value | Description | + | | -- | -- | + | Select a template for your function | Durable Functions HTTP starter | Create an HTTP starter function | + | Provide a function name | DurableFunctionsHttpStart | Name of your activity function | + | Authorization level | Anonymous | For demo purposes, allow the function to be called without authentication | ++You've added an HTTP triggered function that starts an orchestration. Open *DurableFunctionsHttpStart/index.ts* to see that it uses `client.startNew` to start a new orchestration. Then it uses `client.createCheckStatusResponse` to return an HTTP response containing URLs that can be used to monitor and manage the new orchestration. ++You now have a Durable Functions app that can be run locally and deployed to Azure. ++One of the benefits of the V4 Programming Model is the flexibility of where you write your functions. +In the V4 Model, you can use a single template to create all three functions in one file in your project. ++1. In the command palette, search for and select `Azure Functions: Create Function...`. ++1. Following the prompts, provide the following information: ++ | Prompt | Value | Description | + | | -- | -- | + | Select a template for your function | Durable Functions orchestrator | Create a file with a Durable Functions orchestration, an Activity function, and a Durable Client starter function. | + | Choose a durable storage type | Azure Storage (Default) | Select the storage backend used for Durable Functions. | + | Provide a function name | hello | Name used for your durable functions | ++Open *src/functions/hello.ts* to view the functions you created. ++You've created an orchestrator called `helloOrchestrator` to coordinate activity functions. Each call to `context.df.callActivity` invokes an activity function called `hello`. ++You've also added the `hello` activity function that is invoked by the orchestrator. In the same file, you can see that it's taking a name as input and returning a greeting. An activity function is where you perform "the real work" in your workflow: work such as making a database call or performing some non-deterministic computation. ++Lastly, you've also added an HTTP triggered function that starts an orchestration. In the same file, you can see that it uses `client.startNew` to start a new orchestration. Then it uses `client.createCheckStatusResponse` to return an HTTP response containing URLs that can be used to monitor and manage the new orchestration. ++You now have a Durable Functions app that can be run locally and deployed to Azure. 
++## Test the function locally ++Azure Functions Core Tools lets you run an Azure Functions project on your local development computer. You're prompted to install these tools the first time you start a function from Visual Studio Code. +++> [!NOTE] +> To run the V4 programming model, your app needs to have the `EnableWorkerIndexing` feature flag set. When running locally, you need to set `AzureWebJobsFeaturesFlags` to value of `EnableWorkerIndexing` in your `local.settings.json` file. This should already be set when creating your project. To verify, check the following line exists in your `local.settings.json` file, and add it if it doesn't. +> +> ```json +> "AzureWebJobsFeatureFlags": "EnableWorkerIndexing" +> ``` ++++1. To test your function, set a breakpoint in the `Hello` activity function code (*Hello/index.ts*). Press F5 or select `Debug: Start Debugging` from the command palette to start the function app project. Output from Core Tools is displayed in the **Terminal** panel. +1. To test your function, set a breakpoint in the `hello` activity function code (*src/functions/hello.ts*). Press F5 or select `Debug: Start Debugging` from the command palette to start the function app project. Output from Core Tools is displayed in the **Terminal** panel. ++ > [!NOTE] + > Refer to the [Durable Functions Diagnostics](durable-functions-diagnostics.md#debugging) for more information on debugging. ++2. Durable Functions requires an Azure Storage account to run. When VS Code prompts you to select a storage account, choose **Select storage account**. ++  ++3. Following the prompts, provide the following information to create a new storage account in Azure. ++ | Prompt | Value | Description | + | | -- | -- | + | Select subscription | *name of your subscription* | Select your Azure subscription | + | Select a storage account | Create a new storage account | | + | Enter the name of the new storage account | *unique name* | Name of the storage account to create | + | Select a resource group | *unique name* | Name of the resource group to create | + | Select a location | *region* | Select a region close to you | ++4. In the **Terminal** panel, copy the URL endpoint of your HTTP-triggered function. ++  ++5. Using your browser, or a tool like [Postman](https://www.getpostman.com/) or [cURL](https://curl.haxx.se/), send an HTTP POST request to the URL endpoint. Replace the last segment with the name of the orchestrator function (`HelloOrchestrator`). The URL should be similar to `http://localhost:7071/api/orchestrators/HelloOrchestrator`. +5. Using your browser, or a tool like [Postman](https://www.getpostman.com/) or [cURL](https://curl.haxx.se/), send an HTTP POST request to the URL endpoint. Replace the last segment with the name of the orchestrator function (`helloOrchestrator`). The URL should be similar to `http://localhost:7071/api/orchestrators/helloOrchestrator`. ++ The response is the initial result from the HTTP function letting you know the durable orchestration has started successfully. It is not yet the end result of the orchestration. The response includes a few useful URLs. For now, let's query the status of the orchestration. ++6. Copy the URL value for `statusQueryGetUri` and paste it in the browser's address bar and execute the request. Alternatively you can also continue to use Postman to issue the GET request. ++ The request queries the orchestration instance for the status. 
You should get an eventual response, which shows us the instance has completed, and includes the outputs or results of the durable function. It looks like:
+
+ ::: zone pivot="nodejs-model-v3"
+ ```json
+ {
+ "name": "HelloOrchestrator",
+ "instanceId": "9a528a9e926f4b46b7d3deaa134b7e8a",
+ "runtimeStatus": "Completed",
+ "input": null,
+ "customStatus": null,
+ "output": [
+ "Hello Tokyo!",
+ "Hello Seattle!",
+ "Hello London!"
+ ],
+ "createdTime": "2020-03-18T21:54:49Z",
+ "lastUpdatedTime": "2020-03-18T21:54:54Z"
+ }
+ ```
+ ::: zone-end
+ ::: zone pivot="nodejs-model-v4"
+ ```json
+ {
+ "name": "helloOrchestrator",
+ "instanceId": "6ba3f77933b1461ea1a3828c013c9d56",
+ "runtimeStatus": "Completed",
+ "input": "",
+ "customStatus": null,
+ "output": [
+ "Hello, Tokyo",
+ "Hello, Seattle",
+ "Hello, Cairo"
+ ],
+ "createdTime": "2023-02-13T23:02:21Z",
+ "lastUpdatedTime": "2023-02-13T23:02:25Z"
+ }
+ ```
+ ::: zone-end
++7. To stop debugging, press **Shift + F5** in VS Code.
++After you've verified that the function runs correctly on your local computer, it's time to publish the project to Azure.
+++++## Update app settings
++To enable your V4 programming model app to run in Azure, you need to add the `EnableWorkerIndexing` flag under the `AzureWebJobsFeatureFlags` app setting.
++1. In Visual Studio Code, press <kbd>F1</kbd> to open the command palette. In the command palette, search for and select `Azure Functions: Add New Setting...`.
+2. Choose your new function app, then type `AzureWebJobsFeatureFlags` for the new app setting name, and press <kbd>Enter</kbd>.
+3. For the value, type `EnableWorkerIndexing` and press <kbd>Enter</kbd>.
+++## Test your function in Azure
++> [!NOTE]
+> To use the V4 node programming model, make sure your app is running on at least version 4.16.5 of the Azure Functions runtime.
++1. Copy the URL of the HTTP trigger from the **Output** panel. The URL that calls your HTTP-triggered function should be in this format: `http://<functionappname>.azurewebsites.net/api/orchestrators/HelloOrchestrator`
+1. Copy the URL of the HTTP trigger from the **Output** panel. The URL that calls your HTTP-triggered function should be in this format: `http://<functionappname>.azurewebsites.net/api/orchestrators/helloOrchestrator`
++2. Paste this new URL for the HTTP request into your browser's address bar. You should get the same status response as before when using the published app.
++## Next steps
++You have used Visual Studio Code to create and publish a TypeScript durable function app.
++> [!div class="nextstepaction"]
+> [Learn about common durable function patterns](durable-functions-overview.md#application-patterns) |
azure-functions | Functions Node Upgrade V4 | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-node-upgrade-v4.md | + + Title: Upgrade to v4 of the Node.js model for Azure Functions +description: This article shows you how to upgrade your existing function apps running on v3 of the Node.js programming model to v4. + Last updated : 03/15/2023+ms.devlang: javascript, typescript ++++# Upgrade to version 4 of the Node.js programming model for Azure Functions ++This article discusses the differences between version 3 and version 4 of the Node.js programming model and how to upgrade an existing v3 app. If you want to create a brand new v4 app instead of upgrading an existing v3 app, see the tutorial for either [VS Code](./create-first-function-cli-node.md) or [Azure Functions Core Tools](./create-first-function-vs-code-node.md). This article uses "TIP" sections to highlight the most important concrete actions you should take to upgrade your app. ++Version 4 was designed with the following goals in mind: ++- Provide a familiar and intuitive experience to Node.js developers +- Make the file structure flexible with support for full customization +- Switch to a code-centric approach for defining function configuration +++## Requirements ++Version 4 of the Node.js programming model requires the following minimum versions: ++- [`@azure/functions`](https://www.npmjs.com/package/@azure/functions) npm package v4.0.0-alpha.8+ +- [Node.js](https://nodejs.org/en/download/releases/) v18+ +- [TypeScript](https://www.typescriptlang.org/) v4+ +- [Azure Functions Runtime](./functions-versions.md) v4.16+ +- [Azure Functions Core Tools](./functions-run-local.md) v4.0.4915+ (if running locally) ++## Include the npm package ++For the first time, the [`@azure/functions`](https://www.npmjs.com/package/@azure/functions) npm package contains the primary source code that backs the Node.js programming model. In previous versions, that code shipped directly in Azure and the npm package only had the TypeScript types. Moving forward, you need to include this package for both TypeScript and JavaScript apps. You _can_ include the package for existing v3 apps, but it isn't required. ++> [!TIP] +> Make sure the `@azure/functions` package is listed in the `dependencies` section (not `devDependencies`) of your `package.json` file. You can install v4 with the command +> ``` +> npm install @azure/functions@preview +> ``` ++## Set your app entry point ++In v4 of the programming model, you can structure your code however you want. The only files you need at the root of your app are `host.json` and `package.json`. Otherwise, you define the file structure by setting the `main` field in your `package.json` file. The `main` field can be set to a single file or multiple files by using a [glob pattern](https://wikipedia.org/wiki/Glob_(programming)). Common values for the `main` field may be: +- TypeScript + - `dist/src/index.js` + - `dist/src/functions/*.js` +- JavaScript + - `src/index.js` + - `src/functions/*.js` ++> [!TIP] +> Make sure you define a `main` field in your `package.json` file ++## Switch the order of arguments ++The trigger input is now the first argument to your function handler instead of the invocation context. The invocation context, now the second argument, was simplified in v4 and isn't as required as the trigger input - it can be left off if you aren't using it. ++> [!TIP] +> Switch the order of your arguments. 
For example if you are using an http trigger, switch `(context, request)` to either `(request, context)` or just `(request)` if you aren't using the context. ++## Define your function in code ++Say goodbye 👋 to `function.json` files! All of the configuration that was previously specified in a `function.json` file is now defined directly in your TypeScript or JavaScript files. In addition, many properties now have a default so that you don't have to specify them every time. ++# [v4](#tab/v4) ++```javascript +const { app } = require("@azure/functions"); ++app.http('helloWorld1', { + methods: ['GET', 'POST'], + handler: async (request, context) => { + context.log('Http function processed request'); ++ const name = request.query.get('name') + || await request.text() + || 'world'; ++ return { body: `Hello, ${name}!` }; + } +}); +``` ++# [v3](#tab/v3) ++```javascript +module.exports = async function (context, req) { + context.log('HTTP function processed a request'); ++ const name = req.query.name + || req.body + || 'world'; ++ context.res = { + body: `Hello, ${name}!` + }; +}; +``` ++```json +{ + "bindings": [ + { + "authLevel": "anonymous", + "type": "httpTrigger", + "direction": "in", + "name": "req", + "methods": [ + "get", + "post" + ] + }, + { + "type": "http", + "direction": "out", + "name": "res" + } + ] +} +``` ++++> [!TIP] +> Move the config from your `function.json` file to your code. The type of the trigger will correspond to a method on the `app` object in the new model. For example, if you use an `httpTrigger` type in `function.json`, you will now call `app.http()` in your code to register the function. If you use `timerTrigger`, you will now call `app.timer()` and so on. +++## Review your usage of context ++The `context` object has been simplified to reduce duplication and make it easier to write unit tests. For example, we streamlined the primary input and output so that they're only accessed as the argument and return value of your function handler. The primary input and output can't be accessed on the `context` object anymore, but you must still access _secondary_ inputs and outputs on the `context` object. For more information about secondary inputs and outputs, see the [Node.js developer guide](./functions-reference-node.md#extra-inputs-and-outputs). ++### Get the primary input as an argument ++The primary input is also called the "trigger" and is the only required input or output. You must have one and only one trigger. ++# [v4](#tab/v4) ++v4 only supports one way of getting the trigger input, as the first argument. ++```javascript +async function helloWorld1(request, context) { + const onlyOption = request; +``` ++# [v3](#tab/v3) ++v3 supports several different ways of getting the trigger input. ++```javascript +async function helloWorld1(context, request) { + const option1 = request; + const option2 = context.req; + const option3 = context.bindings.req; +``` ++++> [!TIP] +> Make sure you aren't using `context.req` or `context.bindings` to get the input. ++### Set the primary output as your return value ++# [v4](#tab/v4) ++v4 only supports one way of setting the primary output, through the return value. ++```javascript +return { + body: `Hello, ${name}!` +}; +``` ++# [v3](#tab/v3) ++v3 supports several different ways of setting the primary output. 
++```javascript +// Option 1 +context.res = { + body: `Hello, ${name}!` +}; +// Option 2, but you can't use this option with any async code: +context.done(null, { + body: `Hello, ${name}!` +}); +// Option 3, but you can't use this option with any async code: +context.res.send(`Hello, ${name}!`); +// Option 4, if "name" in "function.json" is "res": +context.bindings.res = { + body: `Hello, ${name}!` +} +// Option 5, if "name" in "function.json" is "$return": +return { + body: `Hello, ${name}!` +}; +``` ++++> [!TIP] +> Make sure you are always returning the output in your function handler, instead of setting it with the `context` object. ++### Create a test context ++v3 doesn't support creating an invocation context outside of the Azure Functions runtime, making it difficult to author unit tests. v4 allows you to create an instance of the invocation context, although the information during tests isn't detailed unless you add it yourself. ++# [v4](#tab/v4) ++```javascript +const testInvocationContext = new InvocationContext({ + functionName: 'testFunctionName', + invocationId: 'testInvocationId' +}); +``` ++# [v3](#tab/v3) ++Not possible 😮 ++++## Review your usage of HTTP types ++The http request and response types are now a subset of the [fetch standard](https://developer.mozilla.org/docs/Web/API/fetch) instead of being types unique to Azure Functions. The types use Node.js's [`undici`](https://undici.nodejs.org/) package, which follows the fetch standard and is [currently being integrated](https://github.com/nodejs/undici/issues/1737) into Node.js core. ++### HttpRequest ++# [v4](#tab/v4) +- _**Body**_. You can access the body using a method specific to the type you would like to receive: + ```javascript + const body = await request.text(); + const body = await request.json(); + const body = await request.formData(); + const body = await request.arrayBuffer(); + const body = await request.blob(); + ``` +- _**Header**_: + ```javascript + const header = request.headers.get('content-type'); + ``` +- _**Query param**_: + ```javascript + const name = request.query.get('name'); + ``` ++# [v3](#tab/v3) +- _**Body**_. You can access the body in several ways, but the type returned isn't always consistent: + ```javascript + // returns a string, object, or Buffer + const body = request.body; + // returns a string + const body = request.rawBody; + // returns a Buffer + const body = request.bufferBody; + // returns an object representing a form + const body = await request.parseFormBody(); + ``` +- _**Header**_. A header can be retrieved in several different ways: + ```javascript + const header = request.get('content-type'); + const header = request.headers.get('content-type'); + const header = context.bindingData.headers['content-type']; + ``` +- _**Query param**_: + ```javascript + const name = request.query.name; + ``` +++### HttpResponse ++# [v4](#tab/v4) +- _**Status**_: + ```javascript + return { status: 200 }; + ``` +- _**Body**_: + ```javascript + return { body: "Hello, world!" }; + ``` +- _**Header**_. You can set the header in two ways, depending if you're using the `HttpResponse` class or `HttpResponseInit` interface: + ```javascript + const response = new HttpResponse(); + response.headers.set('content-type', 'application/json'); + return response; + ``` + ```javascript + return { + headers: { 'content-type': 'application/json' } + }; + ``` ++# [v3](#tab/v3) +- _**Status**_. 
A status can be set in several different ways: + ```javascript + context.res.status(200); + context.res = { status: 200} + context.res = { statusCode: 200 }; + return { status: 200}; + return { statusCode: 200 }; + ``` +- _**Body**_. A body can be set in several different ways: + ```javascript + context.res.send("Hello, world!"); + context.res.end("Hello, world!"); + context.res = { body: "Hello, world!" } + return { body: "Hello, world!" }; + ``` +- _**Header**_. A header can be set in several different ways: + ```javascript + response.set('content-type', 'application/json'); + response.setHeader('content-type', 'application/json'); + response.headers = { 'content-type': 'application/json' } + context.res = { + headers: { 'content-type': 'application/json' } + }; + return { + headers: { 'content-type': 'application/json' } + }; + ``` ++++> [!TIP] +> Update any logic using the http request or response types to match the new methods. If you are using TypeScript, you should receive build errors if you use old methods. ++## Troubleshooting ++If you see the following error, make sure you [set the `EnableWorkerIndexing` flag](./functions-reference-node.md#enable-v4-programming-model) and you're using the minimum version of all [requirements](#requirements): ++> No job functions found. Try making your job classes and methods public. If you're using binding extensions (e.g. Azure Storage, ServiceBus, Timers, etc.) make sure you've called the registration method for the extension(s) in your startup code (e.g. builder.AddAzureStorage(), builder.AddServiceBus(), builder.AddTimers(), etc.). |
azure-functions | Functions Reference Node | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-reference-node.md | +zone_pivot_groups: functions-nodejs-model + # Azure Functions JavaScript developer guide -This guide contains detailed information to help you succeed developing Azure Functions using JavaScript. +This guide is an introduction to developing Azure Functions using JavaScript or TypeScript. The article assumes that you've already read the [Azure Functions developer guide](functions-reference.md). ++> [!IMPORTANT] +> The content of this article changes based on your choice of the Node.js programming model in the selector at the top of this page. The version you choose should match the version of the [`@azure/functions`](https://www.npmjs.com/package/@azure/functions) npm package you are using in your app. If you do not have that package listed in your `package.json`, the default is v3. Learn more about the differences between v3 and v4 in the [upgrade guide](./functions-node-upgrade-v4.md). -As an Express.js, Node.js, or JavaScript developer, if you're new to Azure Functions, please consider first reading one of the following articles: +As a JavaScript developer, you might also be interested in one of the following articles: | Getting started | Concepts| Guided learning | | -- | -- | -- | | <ul><li>[Node.js function using Visual Studio Code](./create-first-function-vs-code-node.md)</li><li>[Node.js function with terminal/command prompt](./create-first-function-cli-node.md)</li><li>[Node.js function using the Azure portal](functions-create-function-app-portal.md)</li></ul> | <ul><li>[Developer guide](functions-reference.md)</li><li>[Hosting options](functions-scale.md)</li><li>[TypeScript functions](#typescript)</li><li>[Performance considerations](functions-best-practices.md)</li></ul> | <ul><li>[Create serverless applications](/training/paths/create-serverless-applications/)</li><li>[Refactor Node.js and Express APIs to Serverless APIs](/training/modules/shift-nodejs-express-apis-serverless/)</li></ul> | ++## Supported versions ++The following table shows each version of the Node.js programming model along with its supported versions of the Azure Functions runtime and Node.js. ++| [Programming Model Version](https://www.npmjs.com/package/@azure/functions?activeTab=versions) | Support Level | [Functions Runtime Version](./functions-versions.md) | [Node.js Version](https://github.com/nodejs/release#release-schedule) | Description | +| - | - | | | | +| 4.x | Preview | 4.x | 18.x | Supports a flexible file structure and code-centric approach to triggers and bindings. | +| 3.x | GA | 4.x | 18.x, 16.x, 14.x | Requires a specific file structure with your triggers and bindings declared in a "function.json" file | +| 2.x | GA (EOL) | 3.x | 14.x, 12.x, 10.x | Reached end of life (EOL) on December 13, 2022. See [Functions Versions](./functions-versions.md) for more info. | +| 1.x | GA (EOL) | 2.x | 10.x, 8.x | Reached end of life (EOL) on December 13, 2022. See [Functions Versions](./functions-versions.md) for more info. | ++ ## JavaScript function basics A JavaScript (Node.js) function is an exported `function` that executes when triggered ([triggers are configured in function.json](functions-triggers-bindings.md)). The first argument passed to every function is a `context` object, which is used for receiving and sending binding data, logging, and communicating with the runtime. 
FunctionsProject | - node_modules | - host.json | - package.json- | - extensions.csproj ``` At the root of the project, there's a shared [host.json](functions-host-json.md) file that can be used to configure the function app. Each function has a folder with its own code file (.js) and binding configuration file (function.json). The name of `function.json`'s parent directory is always the name of your function. -The binding extensions required in [version 2.x](functions-versions.md) of the Functions runtime are defined in the `extensions.csproj` file, with the actual library files in the `bin` folder. When developing locally, you must [register binding extensions](./functions-bindings-register.md#extension-bundles). When developing functions in the Azure portal, this registration is done for you. +++## Enable v4 programming model ++The following application setting is required to run the v4 programming model while it is in preview: +- Name: `AzureWebJobsFeatureFlags` +- Value: `EnableWorkerIndexing` ++If you're running locally using [Azure Functions Core Tools](functions-run-local.md), you should add this setting to your `local.settings.json` file. If you're running in Azure, follow these steps with the tool of your choice: ++# [Azure CLI](#tab/azure-cli-set-indexing-flag) ++Replace `<FUNCTION_APP_NAME>` and `<RESOURCE_GROUP_NAME>` with the name of your function app and resource group, respectively. ++```azurecli +az functionapp config appsettings set --name <FUNCTION_APP_NAME> --resource-group <RESOURCE_GROUP_NAME> --settings AzureWebJobsFeatureFlags=EnableWorkerIndexing +``` ++# [Azure PowerShell](#tab/azure-powershell-set-indexing-flag) ++Replace `<FUNCTION_APP_NAME>` and `<RESOURCE_GROUP_NAME>` with the name of your function app and resource group, respectively. ++```azurepowershell +Update-AzFunctionAppSetting -Name <FUNCTION_APP_NAME> -ResourceGroupName <RESOURCE_GROUP_NAME> -AppSetting @{"AzureWebJobsFeatureFlags" = "EnableWorkerIndexing"} +``` ++# [VS Code](#tab/vs-code-set-indexing-flag) ++1. Make sure you have the [Azure Functions extension for VS Code](https://marketplace.visualstudio.com/items?itemName=ms-azuretools.vscode-azurefunctions) installed +1. Press <kbd>F1</kbd> to open the command palette. In the command palette, search for and select `Azure Functions: Add New Setting...`. +1. Choose your subscription and function app when prompted +1. For the name, type `AzureWebJobsFeatureFlags` and press <kbd>Enter</kbd>. +1. For the value, type `EnableWorkerIndexing` and press <kbd>Enter</kbd>. ++++## Folder structure ++The recommended folder structure for a JavaScript project looks like the following example: ++``` +<project_root>/ + | - .vscode/ + | - src/ + | | - functions/ + | | | - myFirstFunction.js + | | | - mySecondFunction.js + | - test/ + | | - functions/ + | | | - myFirstFunction.test.js + | | | - mySecondFunction.test.js + | - .funcignore + | - host.json + | - local.settings.json + | - package.json +``` ++The main project folder, *<project_root>*, can contain the following files: ++* *.vscode/*: (Optional) Contains the stored Visual Studio Code configuration. To learn more, see [Visual Studio Code settings](https://code.visualstudio.com/docs/getstarted/settings). +* *src/functions/*: The default location for all functions and their related triggers and bindings. +* *test/*: (Optional) Contains the test cases of your function app. +* *.funcignore*: (Optional) Declares files that shouldn't get published to Azure. 
Usually, this file contains *.vscode/* to ignore your editor settings, *test/* to ignore test cases, and *local.settings.json* to prevent local app settings being published. +* *host.json*: Contains configuration options that affect all functions in a function app instance. This file does get published to Azure. Not all options are supported when running locally. To learn more, see [host.json](functions-host-json.md). +* *local.settings.json*: Used to store app settings and connection strings when running locally. This file doesn't get published to Azure. To learn more, see [local.settings.file](functions-develop-local.md#local-settings-file). +* *package.json*: Contains configuration options like a list of package dependencies, the main entrypoint, and scripts. ++ <a name="#exporting-an-async-function"></a> By default, the Functions runtime looks for your function in `index.js`, where ` Your exported function is passed a number of arguments on execution. The first argument it takes is always a `context` object. -# [2.x+](#tab/v2-v3-v4-export) ---When using the [`async function`](https://developer.mozilla.org/docs/Web/JavaScript/Reference/Statements/async_function) declaration or plain JavaScript [Promises](https://developer.mozilla.org/docs/Web/JavaScript/Reference/Global_Objects/Promise) in version 2.x, 3.x, or 4.x of the Functions runtime, you don't need to explicitly call the [`context.done`](#contextdone-method) callback to signal that your function has completed. Your function completes when the exported async function/Promise completes. +When using the [`async function`](https://developer.mozilla.org/docs/Web/JavaScript/Reference/Statements/async_function) declaration or plain JavaScript [Promises](https://developer.mozilla.org/docs/Web/JavaScript/Reference/Global_Objects/Promise), you don't need to explicitly call the [`context.done`](#contextdone-method) callback to signal that your function has completed. Your function completes when the exported async function/Promise completes. The following example is a simple function that logs that it was triggered and immediately completes execution. module.exports = async function (context) { }; ``` -When exporting an async function, you can also configure an output binding to take the `return` value. This is recommended if you only have one output binding. +When exporting an async function, you can also configure an output binding to take the `return` value. This option is recommended if you only have one output binding. -# [1.x](#tab/v1-export) --If your function is synchronous (doesn't return a Promise), you must pass the `context` object, as calling `context.done` is required for correct use. +If your function is synchronous (doesn't return a Promise), you must pass the `context` object, as calling `context.done` is required for correct use. This option isn't recommended. For more information, see [Use `async` and `await`](#use-async-and-await). ```javascript // You should include `context` module.exports = function(context, myTrigger, myInput, myOtherInput) { }; ``` --- ### Returning from the function To assign an output using `return`, change the `name` property to `$return` in `function.json`. To define the data type for an input binding, use the `dataType` property in the Options for `dataType` are: `binary`, `stream`, and `string`. +++## Registering a function ++The programming model loads your functions based on the `main` field in your `package.json`. 
This field can be set to a single file like `src/index.js` or a [glob pattern](https://wikipedia.org/wiki/Glob_(programming)) specifying multiple files like `src/functions/*.js`. ++In order to register a function, you must import the `app` object from the `@azure/functions` npm module and call the method specific to your trigger type. The first argument when registering a function will always be the function name. The second argument is an `options` object specifying configuration for your trigger, your handler, and any other inputs or outputs. In some cases where trigger configuration is not necessary, you can pass the handler directly as the second argument instead of an `options` object. ++Registering a function can be done from any file in your project, as long as that file is loaded (directly or indirectly) based on the `main` field in your `package.json` file. The function should be registered at a global scope because you can't register functions once executions have started. ++The following example is a simple function that logs that it was triggered and responds with `Hello, world!`. ++```javascript +const { app } = require('@azure/functions'); ++app.http('httpTrigger1', { + methods: ['POST', 'GET'], + handler: async (_request, context) => { + context.log('Http function processed request'); ++ return { body: 'Hello, world!' }; + } +}); +``` ++## Inputs and outputs ++Your function is required to have exactly one primary input called the trigger. It may also have secondary inputs, a primary output called the return output, and/or secondary outputs. Inputs and outputs are also referred to as bindings outside the context of the Node.js programming model. Before v4 of the model, these bindings were configured in `function.json` files. ++### Trigger input ++The trigger is the only required input or output. For most trigger types, you register a function by using a method on the `app` object named after the trigger type. You can specify configuration specific to the trigger directly on the `options` argument. For example, an HTTP trigger allows you to specify a route. During execution, the value corresponding to this trigger is passed in as the first argument to your handler. ++```javascript +const { app } = require('@azure/functions'); +app.http('helloWorld1', { + route: 'hello/world', + handler: async (request, ...) => { + ... + } +}); +``` ++### Return output ++The return output is optional, and in some cases configured by default. For example, an http trigger registered with `app.http` is configured to return an http response output automatically. For most output types, you specify the return configuration on the `options` argument with the help of the `output` object exported from the `@azure/functions` module. During execution, you set this output by returning it from your handler. ++```javascript +const { app, output } = require('@azure/functions'); +app.timer('timerTrigger1', { + ... + return: output.storageQueue({ + connection: 'storage_APPSETTING', + ... + }), + handler: () => { + return { hello: 'world' } + } +}); +``` ++### Extra inputs and outputs ++In addition to the trigger and return, you may specify extra inputs or outputs. You must configure these on the `options` argument when registering a function. The `input` and `output` objects exported from the `@azure/functions` module provide type-specific methods to help construct the configuration. 
During execution, you get or set the values with `context.extraInputs.get` or `context.extraOutputs.set`, passing in the original configuration object as the first argument. ++The following example is a function triggered by a storage queue, with an extra blob input that is copied to an extra blob output. ++```javascript +const { app, input, output } = require('@azure/functions'); ++const blobInput = input.storageBlob({ + connection: 'storage_APPSETTING', + path: 'helloworld/{queueTrigger}', +}); ++const blobOutput = output.storageBlob({ + connection: 'storage_APPSETTING', + path: 'helloworld/{queueTrigger}-copy', +}); ++app.storageQueue('copyBlob1', { + queueName: 'copyblobqueue', + connection: 'storage_APPSETTING', + extraInputs: [blobInput], + extraOutputs: [blobOutput], + handler: (queueItem, context) => { + const blobInputValue = context.extraInputs.get(blobInput); + context.extraOutputs.set(blobOutput, blobInputValue); + } +}); +``` ++### Generic inputs and outputs ++The `app`, `trigger`, `input`, and `output` objects exported by the `@azure/functions` module provide type-specific methods for most types. For all the types that aren't supported, a `generic` method has been provided to allow you to manually specify the configuration. The `generic` method can also be used if you want to change the default settings provided by a type-specific method. ++The following example is a simple HTTP-triggered function using generic methods instead of type-specific methods. ++```javascript +const { app, output, trigger } = require('@azure/functions'); ++app.generic('helloWorld1', { + trigger: trigger.generic({ + type: 'httpTrigger', + methods: ['GET'] + }), + return: output.generic({ + type: 'http' + }), + handler: async (request, context) => { + context.log(`Http function processed request for url "${request.url}"`); ++ return { body: `Hello, world!` }; + } +}); +``` +++ ## context object The runtime uses a `context` object to pass data to and from your function and the runtime. Used to read and set data from bindings and for writing to logs, the `context` object is always the first parameter passed to a function. Returns a named object that contains trigger metadata and function invocation da ## context.done method -# [2.x](#tab/v2-v3-v4-done) --In 2.x, 3.x, and 4.x, the function should be marked as async even if there's no awaited function call inside the function, and the function doesn't need to call context.done to indicate the end of the function. 
--```javascript -module.exports = function (context, req) { - // 1.x Synchronous code only - // Even though we set myOutput to have: - // -> text: 'hello world', number: 123 - context.bindings.myOutput = { text: 'hello world', number: 123 }; - - // If we pass an object to the done function... - context.done(null, { myOutput: { text: 'hello there, world', noNumber: true }}); - // the done method overwrites the myOutput binding to be: - // -> text: 'hello there, world', noNumber: true -} -``` ---- ## context.log method ```js Because _error_ is the highest trace level, this trace is written to the output Functions lets you define the threshold trace level for writing to the logs or the console. The specific threshold settings depend on your version of the Functions runtime. -# [2.x+](#tab/v2) - To set the threshold for traces written to the logs, use the `logging.logLevel` property in the host.json file. This JSON object lets you define a default threshold for all functions in your function app, plus you can define specific thresholds for individual functions. To learn more, see [How to configure monitoring for Azure Functions](configure-monitoring.md). -# [1.x](#tab/v1) --To set the threshold for all traces written to logs and the console, use the `tracing.consoleLevel` property in the host.json file. This setting applies to all functions in your function app. The following example sets the trace threshold to enable verbose logging: --```json -{ - "tracing": { - "consoleLevel": "verbose" - } -} -``` --Values of **consoleLevel** correspond to the names of the `context.log` methods. To disable all trace logging to the console, set **consoleLevel** to _off_. For more information, see [host.json v1.x reference](functions-host-json-v1.md). --- ## Log custom telemetry By default, Functions writes output as traces to Application Insights. For more control, you can instead use the [Application Insights Node.js SDK](https://github.com/microsoft/applicationinsights-node.js) to send custom telemetry data to your Application Insights instance. -# [2.x+](#tab/v2-log-custom-telemetry) - ```javascript const appInsights = require("applicationinsights"); appInsights.setup(); module.exports = async function (context, req) { }; ``` -# [1.x](#tab/v1-log-custom-telemetry) +The `tagOverrides` parameter sets the `operation_Id` to the function's invocation ID. This setting enables you to correlate all of the automatically generated and custom telemetry for a given function invocation. -```javascript -const appInsights = require("applicationinsights"); -appInsights.setup(); -const client = appInsights.defaultClient; -module.exports = function (context, req) { - context.log('JavaScript HTTP trigger function processed a request.'); - // Use this with 'tagOverrides' to correlate custom telemetry to the parent function invocation. 
- var operationIdOverride = {"ai.operation.id":context.operationId}; +## Invocation context - client.trackEvent({name: "my custom event", tagOverrides:operationIdOverride, properties: {customProperty2: "custom property value"}}); - client.trackException({exception: new Error("handled exceptions can be logged with this method"), tagOverrides:operationIdOverride}); - client.trackMetric({name: "custom metric", value: 3, tagOverrides:operationIdOverride}); - client.trackTrace({message: "trace message", tagOverrides:operationIdOverride}); - client.trackDependency({target:"http://dbname", name:"select customers proc", data:"SELECT * FROM Customers", duration:231, resultCode:0, success: true, dependencyTypeName: "ZSQL", tagOverrides:operationIdOverride}); - client.trackRequest({name:"GET /customers", url:"http://myserver/customers", duration:309, resultCode:200, success:true, tagOverrides:operationIdOverride}); +Each invocation of your function is passed an invocation context object, with extra information about the context and methods used for logging. The `context` object is typically the second argument passed to your handler. - context.done(); -}; +The `InvocationContext` class has the following properties: ++| Property | Description | +| | | +| `invocationId` | The ID of the current function invocation. | +| `functionName` | The name of the function. | +| `extraInputs` | Used to get the values of extra inputs. For more information, see [`Extra inputs and outputs`](#extra-inputs-and-outputs). | +| `extraOutputs` | Used to set the values of extra outputs. For more information, see [`Extra inputs and outputs`](#extra-inputs-and-outputs). | +| `retryContext` | The context for retries to the function. For more information, see [`retry-policies`](./functions-bindings-errors.md#retry-policies). | +| `traceContext` | The context for distributed tracing. For more information, see [`Trace Context`](https://www.w3.org/TR/trace-context/). | +| `triggerMetadata` | Metadata about the trigger input for this invocation other than the value itself. | +| `options` | The options used when registering the function, after they've been validated and with defaults explicitly specified. | ++## Logging ++In Azure Functions, you use the `context.log` method to write logs. When you call `context.log()`, your message is written with the default level "information". Azure Functions integrates with Azure Application Insights to better capture your function app logs. Application Insights, part of Azure Monitor, provides facilities for collection, visual rendering, and analysis of both application telemetry and your trace outputs. To learn more, see [monitoring Azure Functions](functions-monitoring.md). ++> [!NOTE] +> If you use the alternative Node.js `console.log` method, those logs are tracked at the app-level and will *not* be associated with any specific function. It is *highly recommended* to use `context` for logging instead of `console` so that all logs are associated with a specific function. ++The following example writes a log at the information level, including the invocation ID: ++```javascript +context.log(`Something has happened. Invocation ID: "${context.invocationId}"`); ``` -+### Log levels -The `tagOverrides` parameter sets the `operation_Id` to the function's invocation ID. This setting enables you to correlate all of the automatically generated and custom telemetry for a given function invocation. 
+In addition to the default `context.log` method, the following methods are available that let you write function logs at specific log levels. ++| Method | Description | +| - | | +| **context.trace(_message_)** | Writes a trace-level event to the logs. | +| **context.debug(_message_)** | Writes a debug-level event to the logs. | +| **context.info(_message_)** | Writes an information-level event to the logs. | +| **context.warn(_message_)** | Writes a warning-level event to the logs. | +| **context.error(_message_)** | Writes an error-level event to the logs. | ++ ## HTTP triggers and bindings When you work with HTTP triggers, you can access the HTTP request and response o } ``` - # [2.x+](#tab/v2-accessing-request-and-response) + ```javascript + return { status: 201, body: "Insert succeeded." }; + ``` ++Request and response keys are in lowercase. ++ - In a 2.x+ function, you can return the response object directly: +## HTTP triggers and bindings ++HTTP triggers, webhook triggers, and HTTP output bindings use `HttpRequest` and `HttpResponse` objects to represent HTTP messages. The classes represent a subset of the [fetch standard](https://developer.mozilla.org/docs/Web/API/fetch), using Node.js's [`undici`](https://undici.nodejs.org/) package. ++### Request ++The request can be accessed as the first argument to your handler for an http triggered function. ++```javascript +async (request, context) => { + context.log(`Http function processed request for url "${request.url}"`); +``` ++The `HttpRequest` object has the following properties: ++| Property | Type | Description | +| -- | | -- | +| **`method`** | `string` | HTTP request method used to invoke this function | +| **`url`** | `string` | Request URL | +| **`headers`** | [`Headers`](https://developer.mozilla.org/docs/Web/API/Headers) | HTTP request headers | +| **`query`** | [`URLSearchParams`](https://developer.mozilla.org/docs/Web/API/URLSearchParams) | Query string parameter keys and values from the URL | +| **`params`** | `HttpRequestParams` | Route parameter keys and values | +| **`user`** | `HttpRequestUser | null` | Object representing logged-in user, either through Functions authentication, SWA Authentication, or null when no such user is logged in. | +| **`body`** | [`ReadableStream | null`](https://developer.mozilla.org/docs/Web/API/ReadableStream) | Body as a readable stream | +| **`bodyUsed`** | `boolean` | A boolean indicating if the body has been read from already | ++In order to access a request or response's body, the following methods can be used: ++| Method | Return Type | +| - | -- | +| **`arrayBuffer()`** | [`Promise<ArrayBuffer>`](https://developer.mozilla.org/docs/Web/JavaScript/Reference/Global_Objects/ArrayBuffer) | +| **`blob()`** | [`Promise<Blob>`](https://developer.mozilla.org/docs/Web/API/Blob) | +| **`formData()`** | [`Promise<FormData>`](https://developer.mozilla.org/docs/Web/API/FormData) | +| **`json()`** | `Promise<unknown>` | +| **`text()`** | `Promise<string>` | ++> [!NOTE] +> The body functions can be run only once; subsequent calls will resolve with empty strings/ArrayBuffers. ++### Response ++The response can be set in multiple different ways. +++ **As a simple interface with type `HttpResponseInit`**: This option is the most concise way of returning responses. ```javascript- return { status: 201, body: "Insert succeeded." 
}; + return { body: `Hello, world!` }; ``` - # [1.x](#tab/v1-accessing-request-and-response) + The `HttpResponseInit` interface has the following properties: - In a 1.x sync function, return the response object using the second argument of `context.done()`: + | Property | Type | Description | + | -- | - | -- | + | **`body`** | `BodyInit` (optional) | HTTP response body as one of [`ArrayBuffer`](https://developer.mozilla.org/docs/Web/JavaScript/Reference/Global_Objects/ArrayBuffer), [`AsyncIterable<Uint8Array>`](https://developer.mozilla.org/docs/Web/JavaScript/Reference/Global_Objects/Uint8Array), [`Blob`](https://developer.mozilla.org/docs/Web/API/Blob), [`FormData`](https://developer.mozilla.org/docs/Web/API/FormData), [`Iterable<Uint8Array>`](https://developer.mozilla.org/docs/Web/JavaScript/Reference/Global_Objects/Uint8Array), [`NodeJS.ArrayBufferView`](https://developer.mozilla.org/docs/Web/JavaScript/Reference/Global_Objects/ArrayBuffer), [`URLSearchParams`](https://developer.mozilla.org/docs/Web/API/URLSearchParams), `null`, or `string` | + | **`jsonBody`** | `any` (optional) | A JSON-serializable HTTP Response body. If set, the `HttpResponseInit.body` property is ignored in favor of this property | + | **`status`** | `number` (optional) | HTTP response status code. If not set, defaults to `200`. | + | **`headers`** | [`HeadersInit`](https://developer.mozilla.org/docs/Web/API/Headers) (optional) | HTTP response headers | + | **`cookies`** | `Cookie[]` (optional) | HTTP response cookies | +++ **As a class with type `HttpResponse`**: This option provides helper methods for reading and modifying various parts of the response like the headers. ```javascript- // Define a valid response object. - res = { status: 201, body: "Insert succeeded." }; - context.done(null, res); - ``` - + const response = new HttpResponse({ body: `Hello, world!` }); + response.headers.set('content-type', 'application/json'); + return response; + ``` -Request and response keys are in lowercase. + The `HttpResponse` class accepts an optional `HttpResponseInit` as an argument to its constructor and has the following properties: + + | Property | Type | Description | + | -- | - | -- | + | **`status`** | `number` | HTTP response status code | + | **`headers`** | [`Headers`](https://developer.mozilla.org/docs/Web/API/Headers) | HTTP response headers | + | **`cookies`** | `Cookie[]` | HTTP response cookies | + | **`body`** | [`ReadableStream | null`](https://developer.mozilla.org/docs/Web/API/ReadableStream) | Body as a readable stream | + | **`bodyUsed`** | `boolean` | A boolean indicating if the body has been read from already | + ## Scaling and concurrency The FUNCTIONS_WORKER_PROCESS_COUNT applies to each host that Functions creates w ## Node version -The following table shows current supported Node.js versions for each major version of the Functions runtime, by operating system: --| Functions version | Node version (Windows) | Node Version (Linux) | -||| | -| 4.x (recommended) | `~18`<br/>`~16`<br/>`~14` | `node|18`<br/>`node|16`<br/>`node|14` | -| 3.x | `~14`<br/>`~12`<br/>`~10` | `node|14`<br/>`node|12`<br/>`node|10` | -| 2.x | `~12`<br/>`~10`<br/>`~8` | `node|10`<br/>`node|8` | -| 1.x | 6.11.2 (locked by the runtime) | n/a | --You can see the current version that the runtime is using by logging `process.version` from any function. +You can see the current version that the runtime is using by logging `process.version` from any function. 
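For example, a minimal sketch of such a check (shown here with the v3-style handler signature; adjust the argument order if you're using the v4 model):

```javascript
module.exports = async function (context) {
    // Logs the Node.js version the Functions runtime is currently using, for example "v18.14.0".
    context.log(`Node.js version: ${process.version}`);
};
```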
See [`supported versions`](#supported-versions) for a list of Node.js versions supported by each programming model. ### Setting the Node version az functionapp config set --linux-fx-version "node|18" --name "<MY_APP_NAME>" -- To learn more about Azure Functions runtime support policy, please refer to this [article](./language-support-policy.md). -## Dependency management -In order to use community libraries in your JavaScript code, as is shown in the below example, you need to ensure that all dependencies are installed on your Function App in Azure. --```javascript -// Import the underscore.js library -const _ = require('underscore'); --module.exports = async function(context) { - // Using our imported underscore.js library - const matched_names = _ - .where(context.bindings.myInput.names, {first: 'Carla'}); -} -``` --> [!NOTE] -> You should define a `package.json` file at the root of your Function App. Defining the file lets all functions in the app share the same cached packages, which gives the best performance. If a version conflict arises, you can resolve it by adding a `package.json` file in the folder of a specific function. --When deploying Function Apps from source control, any `package.json` file present in your repo, will trigger an `npm install` in its folder during deployment. But when deploying via the Portal or CLI, you'll have to manually install the packages. --There are two ways to install packages on your Function App: --### Deploying with Dependencies -1. Install all requisite packages locally by running `npm install`. --2. Deploy your code, and ensure that the `node_modules` folder is included in the deployment. ---### <a name="using-kudu"></a>Using Kudu (Windows only) -1. Go to `https://<function_app_name>.scm.azurewebsites.net`. --2. Select **Debug Console** > **CMD**. --3. Go to `D:\home\site\wwwroot`, and then drag your package.json file to the **wwwroot** folder at the top half of the page. - You can upload files to your function app in other ways also. For more information, see [How to update function app files](functions-reference.md#fileupdate). --4. After the package.json file is uploaded, run the `npm install` command in the **Kudu remote execution console**. - This action downloads the packages indicated in the package.json file and restarts the function app. - ## Environment variables Add your own environment variables to a function app, in both your local and cloud environments, such as operational secrets (connection strings, keys, and endpoints) or environmental settings (such as profiling variables). Access these settings using `process.env` in your function code. 
When running in Azure, the function app lets you set and use [Application settin ### Access environment variables in code -Access application settings as environment variables using `process.env`, as shown here in the second and third calls to `context.log()` where we log the `AzureWebJobsStorage` and `WEBSITE_SITE_NAME` environment variables: +Access application settings as environment variables using `process.env`, as shown here in the call to `context.log()` where we log the `WEBSITE_SITE_NAME` environment variable: + ```javascript-module.exports = async function (context, myTimer) { - context.log("AzureWebJobsStorage: " + process.env["AzureWebJobsStorage"]); +async function timerTrigger1(context, myTimer) { context.log("WEBSITE_SITE_NAME: " + process.env["WEBSITE_SITE_NAME"]);-}; +} +``` ++++```javascript +async function timerTrigger1(myTimer, context) { + context.log("WEBSITE_SITE_NAME: " + process.env["WEBSITE_SITE_NAME"]); +} ``` + ## <a name="ecmascript-modules"></a>ECMAScript modules (preview) > [!NOTE]-> As ECMAScript modules are currently a preview feature in Node.js 14 and 16 Azure Functions. +> As ECMAScript modules are currently a preview feature in Node.js 14 or higher in Azure Functions. [ECMAScript modules](https://nodejs.org/docs/latest-v14.x/api/esm.html#esm_modules_ecmascript_modules) (ES modules) are the new official standard module system for Node.js. So far, the code samples in this article use the CommonJS syntax. When running Azure Functions in Node.js 14 or higher, you can choose to write your functions using ES modules syntax. To use ES modules in a function, change its filename to use a `.mjs` extension. The following *index.mjs* file example is an HTTP triggered function that uses ES modules syntax to import the `uuid` library and return a value. ++```js +import { v4 as uuidv4 } from 'uuid'; ++async function httpTrigger1(context, req) { + context.res.body = uuidv4(); +}; ++export default httpTrigger1; +``` +++ ```js import { v4 as uuidv4 } from 'uuid'; -export default async function (context, req) { +async function httpTrigger1(req, context) { context.res.body = uuidv4(); }; ``` ++ ## Configure function entry point The `function.json` properties `scriptFile` and `entryPoint` can be used to configure the location and name of your exported function. These properties can be important when your JavaScript is transpiled. module.exports = myObj; In this example, it's important to note that although an object is being exported, there are no guarantees for preserving state between executions. + ## Local debugging -When started with the `--inspect` parameter, a Node.js process listens for a debugging client on the specified port. In Azure Functions 2.x or higher, you can specify arguments to pass into the Node.js process that runs your code by adding the environment variable or App Setting `languageWorkers:node:arguments = <args>`. +When started with the `--inspect` parameter, a Node.js process listens for a debugging client on the specified port. In Azure Functions runtime 2.x or higher, you can specify arguments to pass into the Node.js process that runs your code by adding the environment variable or App Setting `languageWorkers:node:arguments = <args>`. To debug locally, add `"languageWorkers:node:arguments": "--inspect=5858"` under `Values` in your [local.settings.json](./functions-develop-local.md#local-settings-file) file and attach a debugger to port 5858. 
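For reference, a minimal `local.settings.json` sketch with this debug argument might look like the following; the storage and runtime values shown are common local-development placeholders rather than requirements:

```json
{
  "IsEncrypted": false,
  "Values": {
    "AzureWebJobsStorage": "UseDevelopmentStorage=true",
    "FUNCTIONS_WORKER_RUNTIME": "node",
    "languageWorkers:node:arguments": "--inspect=5858"
  }
}
```

Restart the local Functions host after changing this file so that the new worker arguments take effect.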
When debugging using VS Code, the `--inspect` parameter is automatically added using the `port` value in the project's launch.json file. -In version 1.x, setting `languageWorkers:node:arguments` won't work. The debug port can be selected with the [`--nodeDebugPort`](./functions-run-local.md#start) parameter on Azure Functions Core Tools. +In runtime version 1.x, setting `languageWorkers:node:arguments` won't work. The debug port can be selected with the [`--nodeDebugPort`](./functions-run-local.md#start) parameter on Azure Functions Core Tools. > [!NOTE] > You can only configure `languageWorkers:node:arguments` when running the function app locally. Testing your functions includes: * **HTTP end-to-end**: To test a function from its HTTP endpoint, you can use any tool that can make an HTTP request such as cURL, Postman, or JavaScript's fetch method. * **Integration testing**: Integration test includes the function app layer. This testing means you need to control the parameters into the function including the request and the context. The context is unique to each kind of trigger and means you need to know the incoming and outgoing bindings for that [trigger type](functions-triggers-bindings.md?tabs=javascript#supported-bindings). - Learn more about integration testing and mocking the context layer with an experimental GitHub repo, [https://github.com/anthonychu/azure-functions-test-utils](https://github.com/anthonychu/azure-functions-test-utils). ++Learn more about integration testing and mocking the context layer with an experimental GitHub repo, [https://github.com/anthonychu/azure-functions-test-utils](https://github.com/anthonychu/azure-functions-test-utils). + * **Unit testing**: Unit testing is performed within the function app. You can use any tool that can test JavaScript, such as Jest or Mocha. ## TypeScript -When you target version 2.x or higher of the Functions runtime, both [Azure Functions for Visual Studio Code](./create-first-function-cli-typescript.md) and the [Azure Functions Core Tools](functions-run-local.md) let you create function apps using a template that supports TypeScript function app projects. The template generates `package.json` and `tsconfig.json` project files that make it easier to transpile, run, and publish JavaScript functions from TypeScript code with these tools. +Both [Azure Functions for Visual Studio Code](./create-first-function-cli-typescript.md) and the [Azure Functions Core Tools](functions-run-local.md) let you create function apps using a template that supports TypeScript function app projects. The template generates `package.json` and `tsconfig.json` project files that make it easier to transpile, run, and publish JavaScript functions from TypeScript code with these tools. A generated `.funcignore` file is used to indicate which files are excluded when a project is published to Azure. + TypeScript files (.ts) are transpiled into JavaScript files (.js) in the `dist` output directory. TypeScript templates use the [`scriptFile` parameter](#using-scriptfile) in `function.json` to indicate the location of the corresponding .js file in the `dist` folder. The output location is set by the template by using `outDir` parameter in the `tsconfig.json` file. If you change this setting or the name of the folder, the runtime isn't able to find the code to run. + The way that you locally develop and deploy from a TypeScript project depends on your development tool. 
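As an illustration, the `outDir` setting described earlier typically appears in the generated `tsconfig.json` along these lines; exact compiler options vary by template version, so treat this as a sketch rather than the canonical template:

```json
{
  "compilerOptions": {
    "module": "commonjs",
    "target": "es6",
    "outDir": "dist",
    "rootDir": ".",
    "sourceMap": true,
    "strict": false
  }
}
```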
### Visual Studio Code There are several ways in which a TypeScript project differs from a JavaScript p To create a TypeScript function app project using Core Tools, you must specify the TypeScript language option when you create your function app. You can do this in one of the following ways: -- Run the `func init` command, select `node` as your language stack, and then select `typescript`. +- Run the `func init` command, select `node` as your language stack, and then select `typescript`. - Run the `func init --worker-runtime typescript` command. +++- Run the `func init --model v4` command, select `node` as your language stack, and then select `typescript`. +- Run the `func init --model v4 --worker-runtime typescript` command. ++ #### Run local To run your function app code locally using Core Tools, use the following commands instead of `func host start`: npm start The `npm start` command is equivalent to the following commands: - `npm run build`-- `func extensions install` - `tsc` - `func start` Before you use the [`func azure functionapp publish`] command to deploy to Azure The following commands prepare and publish your TypeScript project using Core Tools: ```command-npm run build:production +npm run build func azure functionapp publish <APP_NAME> ``` When developing Azure Functions in the serverless hosting model, cold starts are When you use a service-specific client in an Azure Functions application, don't create a new client with every function invocation. Instead, create a single, static client in the global scope. For more information, see [managing connections in Azure Functions](manage-connections.md). + ### Use `async` and `await` When writing Azure Functions in JavaScript, you should write code using the `async` and `await` keywords. Writing code using `async` and `await` instead of callbacks or `.then` and `.catch` with Promises helps avoid two common problems: module.exports = async function (context) { } ``` + ## Next steps For more information, see the following resources: |
azure-government | Documentation Government Aad Auth Qs | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-government/documentation-government-aad-auth-qs.md | The only variation when setting up Azure AD Authorization on the Azure Governmen ```cs //ClientId: Azure AD-> App registrations -> Application ID- //Domain: <tenantname>.onmicrosoft.com + //Domain: <tenantname>.onmicrosoft.us //TenantId: Azure AD -> Properties -> Directory ID "Authentication": { The only variation when setting up Azure AD Authorization on the Azure Governmen ``` 4. Fill out the `ClientId` property with the Client ID for your app from the Azure Government portal. You can find the Client ID by navigating to Azure AD -> App Registrations -> Your Application -> Application ID. 5. Fill out the `TenantId` property with the Tenant ID for your app from the Azure Government portal. You can find the Tenant ID by navigating to Azure AD -> Properties -> Directory ID. -6. Fill out the `Domain` property with `<tenantname>.onmicrosoft.com`. +6. Fill out the `Domain` property with `<tenantname>.onmicrosoft.us`. 7. Open the `startup.cs` file. 8. In your `ConfigureServices` method, add the following code: |
azure-maps | Drawing Package Guide | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/drawing-package-guide.md | zone_pivot_groups: drawing-package-version :::zone pivot="drawing-package-v1" -This guide shows you how to prepare your Drawing Package for the [Azure Maps Conversion service] using specific CAD commands to correctly prepare your DWG files and manifest file for the Conversion service. +This guide shows you how to prepare your Drawing Package for the Azure Maps [Conversion service] using specific CAD commands to correctly prepare your DWG files and manifest file for the Conversion service. To start with, make sure your Drawing Package is in .zip format, and contains the following files: The wall layer is meant to represent the physical extents of a facility such as The drawing package Manifest is a JSON file. The Manifest tells the Azure Maps Conversion service how to read the facility DWG files and metadata. Some examples of this information could be the specific information each DWG layer contains, or the geographical location of the facility. -To achieve a successful conversion, all ΓÇ£requiredΓÇ¥ properties must be defined. A sample manifest file can be found inside the [sample drawing package]. This guide doesn't cover properties supported by the manifest. For more information about manifest properties, see [Manifest File Properties]. +To achieve a successful conversion, all ΓÇ£requiredΓÇ¥ properties must be defined. A sample manifest file can be found inside the [sample drawing package]. This guide doesn't cover properties supported by the manifest. For more information about manifest properties, see [Manifest file requirements]. ### Building levels You should now have all the DWG drawings prepared to meet Azure Maps Conversion :::zone pivot="drawing-package-v2" -This guide shows you how to prepare your Drawing Package for the Azure Maps [Conversion service v2]. A Drawing Package contains one or more DWG drawing files for a single facility and a manifest file describing the DWG files. +This guide shows you how to prepare your Drawing Package for the Azure Maps [Conversion service]. A Drawing Package contains one or more DWG drawing files for a single facility and a manifest file describing the DWG files. If you don't have your own package to reference along with this guide, you may download the [sample drawing package v2]. For a better understanding of layers and feature classes, see [Drawing Package R The drawing package Manifest is a JSON file. The Manifest tells the Azure Maps Conversion service how to read the facility DWG files and metadata. Some examples of this information could be the specific information each DWG layer contains, or the geographical location of the facility. -To achieve a successful conversion, all ΓÇ£requiredΓÇ¥ properties must be defined. A sample manifest file can be found inside the [sample drawing package v2]. This guide doesn't cover properties supported by the manifest. For more information about manifest properties, see [Manifest File Properties]. +To achieve a successful conversion, all ΓÇ£requiredΓÇ¥ properties must be defined. A sample manifest file can be found inside the [sample drawing package v2]. This guide doesn't cover properties supported by the manifest. For more information about manifest properties, see [Manifest file requirements]. The manifest can be created manually in any text editor, or can be created using the Azure Maps Creator onboarding tool. This guide provides examples for each. 
Defining text properties enables you to associate text entities that fall inside :::image type="content" source="./media/creator-indoor-maps/onboarding-tool/dwg-layers.png" alt-text="Screenshot showing the create a new manifest screen of the onboarding tool."::: > [!IMPORTANT]-> Wayfinding support for `Drawing Package 2.0` will be supported in the near future. The following feature class should be defined (non-case sensitive) in order to use [wayfinding]. `Wall` will be treated as an obstruction for a given path request. `Stair` and `Elevator` will be treated as level connectors to navigate across floors: +> Wayfinding support for `Drawing Package 2.0` will be available soon. The following feature class should be defined (not case sensitive) in order to use [wayfinding]. `Wall` will be treated as an obstruction for a given path request. `Stair` and `Elevator` will be treated as level connectors to navigate across floors: > > 1. Wall > 2. Stair You should now have all the DWG drawings prepared to meet Azure Maps Conversion > [Tutorial: Creating a Creator indoor map] <! Drawing Package v1 links>-[Azure Maps Conversion service]: /rest/api/maps/v2/conversion [sample drawing package]: https://github.com/Azure-Samples/am-creator-indoor-data-examples/tree/master/Drawing%20Package%201.0-[Manifest File Properties]: drawing-requirements.md#manifest-file-requirements +[Manifest file requirements]: drawing-requirements.md#manifest-file-requirements-1 [Drawing Package Requirements]: drawing-requirements.md [Tutorial: Creating a Creator indoor map]: tutorial-creator-indoor-maps.md <! Drawing Package v2 links>-[Conversion service v2]: https://aka.ms/creator-conversion +[Conversion service]: https://aka.ms/creator-conversion [sample drawing package v2]: https://github.com/Azure-Samples/am-creator-indoor-data-examples/tree/master/Drawing%20Package%202.0 [Azure Maps Creator onboarding tool]: https://azure.github.io/azure-maps-creator-onboarding-tool-[manifest files]: /azure/azure-maps/drawing-requirements#manifest-file-requirements +[manifest files]: drawing-requirements.md#manifest-file-1 [wayfinding]: creator-indoor-maps.md#wayfinding-preview [facility level]: drawing-requirements.md#facility-level |
azure-maps | Drawing Requirements | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/drawing-requirements.md | zone_pivot_groups: drawing-package-version You can convert uploaded drawing packages into map data by using the Azure Maps [Conversion service]. This article describes the drawing package requirements for the Conversion API. To view a sample package, you can download the sample [Drawing package]. -For a guide on how to prepare your drawing package, see [Conversion Drawing Package Guide]. - ## Prerequisites The drawing package includes drawings saved in DWG format, which is the native file format for Autodesk's AutoCAD® software. The table below outlines the supported entity types and converted map features f | Layer | Entity types | Converted Features | | :-- | :-| :--| [Exterior](#exterior-layer) | Polygon, PolyLine (closed), Circle, Ellipse (closed) | Levels -| [Unit](#unit-layer) | Polygon, PolyLine (closed), Circle, Ellipse (closed) | Units and Vertical penetrations -| [Wall](#wall-layer) | Polygon, PolyLine (closed), Circle, Ellipse (closed), Structures | -| [Door](#door-layer) | Polygon, PolyLine, Line, CircularArc, Circle | Openings -| [Zone](#zone-layer) | Polygon, PolyLine (closed), Circle, Ellipse (closed) | Zones +| [Exterior](#exterior-layer) | POLYGON, POLYLINE (closed), CIRCLE, or ELLIPSE (closed) | Levels +| [Unit](#unit-layer) | POLYGON, POLYLINE (closed), CIRCLE, or ELLIPSE (closed) | Units and Vertical penetrations +| [Wall](#wall-layer) | POLYGON, POLYLINE (closed), CIRCLE, or ELLIPSE (closed), Structures | +| [Door](#door-layer) | POLYGON, POLYLINE, LINE, CIRCULARARC, CIRCLE | Openings +| [Zone](#zone-layer) | POLYGON, POLYLINE (closed), CIRCLE, or ELLIPSE (closed) | Zones | [UnitLabel](#unitlabel-layer) | Text (single line) | Not applicable. This layer can only add properties to the unit features from the Units layer. For more information, see the [UnitLabel layer](#unitlabel-layer). | [ZoneLabel](#zonelabel-layer) | Text (single line) | Not applicable. This layer can only add properties to zone features from the ZonesLayer. For more information, see the [ZoneLabel layer](#zonelabel-layer). The DWG file for each level must contain a layer to define that level's perimete No matter how many entity drawings are in the exterior layer, the [resulting facility dataset](tutorial-creator-feature-stateset.md) will contain only one level feature for each DWG file. Additionally: -- Exteriors must be drawn as Polygon, PolyLine (closed), Circle, or Ellipse (closed).+- Exteriors must be drawn as POLYGON, POLYLINE (closed), CIRCLE, or ELLIPSE (closed). - Exteriors may overlap, but are dissolved into one geometry. - Resulting level feature must be at least 4 square meters. - Resulting level feature must not be greater 400,000 square meters. -If the layer contains multiple overlapping PolyLines, the PolyLines are dissolved into a single Level feature. Instead, if the layer contains multiple non-overlapping PolyLines, the resulting Level feature has a multi-polygonal representation. +If the layer contains multiple overlapping PolyLines, they're dissolved into a single Level feature. Instead, if the layer contains multiple non-overlapping PolyLines, the resulting Level feature has a multi-polygonal representation. You can see an example of the Exterior layer as the outline layer in the [sample drawing package]. The DWG file for each level defines a layer containing units. 
Units are navigabl The Units layer should adhere to the following requirements: -- Units must be drawn as Polygon, PolyLine (closed), Circle, or Ellipse (closed).+- Units must be drawn as POLYGON, POLYLINE (closed), CIRCLE, or ELLIPSE (closed). - Units must fall inside the bounds of the facility exterior perimeter. - Units must not partially overlap. - Units must not contain any self-intersecting geometry. You can see an example of the Units layer in the [sample drawing package]. The DWG file for each level can contain a layer that defines the physical extents of walls, columns, and other building structure. -- Walls must be drawn as Polygon, PolyLine (closed), Circle, or Ellipse (closed).+- Walls must be drawn as POLYGON, POLYLINE (closed), CIRCLE, or ELLIPSE (closed). - The wall layer or layers should only contain geometry that's interpreted as building structure. You can see an example of the Walls layer in the [sample drawing package]. Door openings in an Azure Maps dataset are represented as a single-line segment The DWG file for each level can contain a Zone layer that defines the physical extents of zones. A zone is a non-navigable space that can be named and rendered. Zones can span multiple levels and are grouped together using the zoneSetId property. -- Zones must be drawn as Polygon, PolyLine (closed), or Ellipse (closed).+- Zones must be drawn as POLYGON, POLYLINE (closed), or ELLIPSE (closed). - Zones can overlap. - Zones can fall inside or outside the facility's exterior perimeter. Below is the manifest file for the sample drawing package. Go to the [Sample dra :::zone pivot="drawing-package-v2" -You can convert uploaded drawing packages into map data by using the Azure Maps [Conversion service v2]. This article describes the drawing package requirements for the Conversion API. To view a sample package, you can download the [sample drawing package v2]. +You can convert uploaded drawing packages into map data by using the Azure Maps [Conversion service]. This article describes the drawing package requirements for the Conversion API. To view a sample package, you can download the [sample drawing package v2]. For a guide on how to prepare your drawing package, see [Conversion Drawing Package Guide]. The drawing package includes drawings saved in DWG format, which is the native f You can choose any CAD software to produce the drawings in the drawing package. -The [Conversion service v2] converts the drawing package into map data. The Conversion service works with the AutoCAD DWG file format AC1032. +The [Conversion service] converts the drawing package into map data. The Conversion service works with the AutoCAD DWG file format AC1032. ## Glossary of terms One or more DWG layer(s) can be mapped to a user defined feature class. One inst - All layers should be separated to represent different feature types of the facility. - All entities must fall inside the bounds of the level perimeter.-- Supported AutoCAD entity types: text, mtext, point, arc, circle, line, polyline, ellipse. +- Supported AutoCAD entity types: TEXT, MTEXT, POINT, ARC, CIRCLE, LINE, POLYLINE, ELLIPSE. ### Feature class properties Text entities that fall within the bounds of a closed shape can be associated to that feature as a property. For example, a room feature class might have text that describes the room name and another the room type [sample drawing package v2]. Additionally: - Only TEXT and MTEXT entities will be associated to the feature as a property. 
All other entity types will be ignored.-- TEXT and MTEXT justification point must fall within the bounds of the closed shape.-- If more than one TEXT property is within the bounds of the closed shape and both are mapped to one property, one will randomly be selected.+- The TEXT and MTEXT justification point must fall within the bounds of the closed shape. +- If more than one TEXT property is within the bounds of the closed shape and both are mapped to one property, one will be randomly selected. ### Facility level The DWG file for each level must contain a layer to define that level's perimete No matter how many entity drawings are in the level perimeter layer, the resulting facility dataset contains only one level feature for each DWG file. Additionally: -- Level perimeters must be drawn as Polygon, Polyline (closed), Circle, or Ellipse (closed).+- Level perimeters must be drawn as POLYGON, POLYLINE (closed), CIRCLE, or ELLIPSE (closed). - Level perimeters may overlap but are dissolved into one geometry. - The resulting level feature must be at least 4 square meters. - The resulting level feature must not be greater than 400,000 square meters. -If the layer contains multiple overlapping Polylines, the Polylines are dissolved into a single Level feature. Instead, if the layer contains -multiple nonoverlapping Polylines, the resulting Level feature has a multi-polygonal representation. +If the layer contains multiple overlapping POLYLINES, they're combined into a single Level feature. Instead, if the layer contains +multiple nonoverlapping POLYLINES, the resulting Level feature has a multi-polygonal representation. -You can see an example of the Level perimeter layer as the 'GROS$' layer in the [sample drawing package v2]. +You can see an example of the Level perimeter layer as the `GROS$` layer in the [sample drawing package v2]. ## Manifest file requirements The drawing package must contain a manifest file at the root level and the file must be named **manifest.json**. It describes the DWG files-allowing the  [Conversion service v2] to parse their content. Only the files identified by the manifest are used. Files that are in the drawing package, but aren't properly listed in the manifest, are ignored. +allowing the  [Conversion service] to parse their content. Only the files identified by the manifest are used. Files that are in the drawing package, but aren't properly listed in the manifest, are ignored. The file paths in the buildingLevels object of the manifest file must be relative to the root of the drawing package. The DWG file name must exactly match the name of the facility level. For example, a DWG file for the "Basement" level is *Basement.dwg*. A DWG file for level 2 is named as *level_2.dwg*. Filenames can't contain spaces, you can use an underscore to replace any spaces. -Although there are requirements when you use the manifest objects, not all objects are required. The following table shows the required and optional objects for the 2023-03-01-preview [Conversion service v2]. +Although there are requirements when you use the manifest objects, not all objects are required. The following table shows the required and optional objects for the 2023-03-01-preview [Conversion service]. > [!NOTE] > Unless otherwise specified, all string properties are limited to one thousand characters. Although there are requirements when you use the manifest objects, not all objec | Property | Type | Required | Description  | |-|-|-|--| | `version` | number | TRUE | Manifest schema version. 
Currently version 2.0 |-|`buildingLevels`| [BuildingLevels](#buildinglevels) object  | TRUE | Specifies the levels of the facility and the files containing the design of the levels. | +|`buildingLevels`| [BuildingLevels] object  | TRUE | Specifies the levels of the facility and the files containing the design of the levels. | |`featureClasses`|Array of [featureClass] objects| TRUE | List of feature class objects that define how layers are read from the DWG drawing file.|-| `georeference` |[Georeference](#georeference) object| FALSE | Contains numerical geographic information for the facility drawing.     | +| `georeference` |[Georeference] object | FALSE | Contains numerical geographic information for the facility drawing.     | | `facilityName` | string | FALSE | The name of the facility. | The next sections detail the requirements for each object. The next sections detail the requirements for each object. | Property | Type | Required | Description | |--|--|-|--|-| `dwgLayers` | Array of strings | TRUE | The name of each layer that defines the feature class property. Each entity on the specified layer is converted to a property. Only the DWG `TEXT` and `MTEXT` entities are converted to properties. All other entities are ignored. | +| `dwgLayers` | Array of strings | TRUE | The name of each layer that defines the feature class property. Each entity on the specified layer is converted to a property. Only the DWG TEXT and MTEXT entities are converted to properties. All other entities are ignored. | |`featureClassPropertyName`| String | TRUE | Name of the feature class property, for example, spaceName or spaceUseType.| #### georeference The JSON in this example shows the manifest file for the sample drawing package. ## Next steps +For a guide on how to prepare your drawing package, see [Conversion Drawing Package Guide]. + > [!div class="nextstepaction"]-> [Tutorial: Creating a Creator indoor map](tutorial-creator-indoor-maps.md) +> [Drawing Package Guide] Learn more by reading: Learn more by reading: > [Creator for indoor maps](creator-indoor-maps.md) <! Drawing Package v1 links>-[Conversion service]: /rest/api/maps/v2/conversion [Drawing package]: https://github.com/Azure-Samples/am-creator-indoor-data-examples/tree/master/Drawing%20Package%201.0-[Conversion Drawing Package Guide]: drawing-package-guide.md +[Drawing Package Guide]: drawing-package-guide.md [sample drawing package]: https://github.com/Azure-Samples/am-creator-indoor-data-examples/tree/master/Drawing%20Package%201.0 [OSM Opening Hours]: https://wiki.openstreetmap.org/wiki/Key:opening_hours/specification <! Drawing Package v2 links>-[Conversion service v2]: https://aka.ms/creator-conversion +[Conversion service]: https://aka.ms/creator-conversion [sample drawing package v2]: https://github.com/Azure-Samples/am-creator-indoor-data-examples/tree/master/Drawing%20Package%202.0-[Georeference]: drawing-package-guide.md#georeference +[Georeference]: #georeference [featureClass]: #featureclass [featureClassProperty]: #featureclassproperty+[BuildingLevels]: #buildinglevels |
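The manifest requirements above call out three required top-level objects (`version`, `buildingLevels`, `featureClasses`), two optional ones (`georeference`, `facilityName`), a 1,000-character limit on string properties unless otherwise specified, and the rule that DWG file names can't contain spaces. The following Python sketch is a rough local pre-check of just those stated rules; it deliberately doesn't assume anything about the inner schema of `buildingLevels` or `featureClass` beyond what the table says, and the Conversion service remains the authoritative validator.

```python
# Rough pre-upload check for the manifest rules stated above (version 2.0,
# required top-level objects, 1,000-character string limit, no spaces in DWG
# file names). A sketch based only on this article's table, not an official validator.
import json
from pathlib import Path

manifest = json.loads(Path("manifest.json").read_text())

required = ["version", "buildingLevels", "featureClasses"]
missing = [key for key in required if key not in manifest]
assert not missing, f"manifest.json is missing required properties: {missing}"

assert float(manifest["version"]) == 2.0, "manifest schema version should currently be 2.0"

if "facilityName" in manifest:
    assert isinstance(manifest["facilityName"], str) and len(manifest["facilityName"]) <= 1000

def walk_strings(node):
    """Yield every string value anywhere in the manifest."""
    if isinstance(node, dict):
        for value in node.values():
            yield from walk_strings(value)
    elif isinstance(node, list):
        for value in node:
            yield from walk_strings(value)
    elif isinstance(node, str):
        yield node

for text in walk_strings(manifest):
    assert len(text) <= 1000, f"string property exceeds 1,000 characters: {text[:50]}..."
    if text.lower().endswith(".dwg"):
        assert " " not in text, f"DWG file names can't contain spaces: {text}"

print("Basic manifest checks passed.")
```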
azure-maps | Glossary | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/glossary.md | The following list describes common words used with the Azure Maps services. <a name="postal-code"></a> **Postal code**: A series of letters or numbers, or both, in a specific format. The postal-code is used by the postal service of a country/region to divide geographic areas into zones in order to simplify delivery of mail. -<a name="primary-key"></a> **Primary key**: The first of two subscriptions keys provided for Azure Maps shared key authentication. See [Shared key authentication](#shared-key-authentication). +<a name="primary-key"></a> **Primary key**: The first of two subscription keys provided for Azure Maps shared key authentication. See [Shared key authentication](#shared-key-authentication). <a name="prime-meridian"></a> **Prime meridian**: A line of longitude that represents 0-degrees longitude. Generally, longitude values decrease when traveling in a westerly direction to -180 degrees and increase when traveling in an easterly direction to 180 degrees. |
azure-maps | How To Use Spatial Io Module | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/how-to-use-spatial-io-module.md | You can load the Azure Maps spatial IO module using one of the two options: </html> ``` -5. Remember to replace `<Your Azure Maps Key>` with your primary key. Open your HTML file, and you'll see results similar to the following image: +5. Remember to replace `<Your Azure Maps Key>` with your subscription key. Open your HTML file, and you'll see results similar to the following image: <center> |
azure-maps | Quick Android Map | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/quick-android-map.md | Create a new Azure Maps account using the following steps: :::image type="content" source="./media/shared/create-account.png" alt-text="A screenshot that shows the Create Maps account pane in the Azure portal."::: -## Get the primary key for your account +## Get the subscription key for your account -Once your Azure Maps account is successfully created, retrieve the primary key that enables you to query the Maps APIs. +Once your Azure Maps account is successfully created, retrieve the subscription key that enables you to query the Maps APIs. 1. Open your Azure Maps account in the portal. 2. In the left pane, select **Authentication**. 3. Copy the **Primary Key** and save it locally to use later in this tutorial. >[!NOTE]-> If you use the Azure subscription key instead of the Azure Maps primary key, your map won't render properly. Also, for security purposes, it is recommended that you rotate between your primary and secondary keys. To rotate keys, update your app to use the secondary key, deploy, then press the cycle/refresh button beside the primary key to generate a new primary key. The old primary key will be disabled. For more information on key rotation, see [Set up Azure Key Vault with key rotation and auditing](../key-vault/secrets/tutorial-rotation-dual.md) +> For security purposes, it is recommended that you rotate between your primary and secondary keys. To rotate keys, update your app to use the secondary key, deploy, then press the cycle/refresh button beside the primary key to generate a new primary key. The old primary key will be disabled. For more information on key rotation, see [Set up Azure Key Vault with key rotation and auditing](../key-vault/secrets/tutorial-rotation-dual.md) :::image type="content" source="./media/quick-android-map/get-key.png" alt-text="A screenshot showing the Azure Maps Primary key in the Azure portal."::: |
azure-maps | Quick Demo Map App | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/quick-demo-map-app.md | Title: 'Quickstart: Interactive map search with Azure Maps' titleSuffix: Microsoft Azure Maps -description: 'Quickstart: Learn how to create interactive, searchable maps. See how to create an Azure Maps account, get a primary key, and use the Web SDK to set up map applications' +description: 'Quickstart: Learn how to create interactive, searchable maps. See how to create an Azure Maps account, get the subscription key, and use the Web SDK to set up map applications' Last updated 12/23/2021-* Get your primary key to use in the demo web application. +* Get your Azure Maps subscription key to use in the demo web application. * Download and open the demo map application. This quickstart uses the Azure Maps Web SDK, however the Azure Maps service can be used with any map control, such as these popular [open-source map controls](open-source-projects.md#third-part-map-control-plugins) that the Azure Maps team has created plugins for. Create a new Azure Maps account with the following steps: ## Get the subscription key for your account -Once your Azure Maps account is successfully created, retrieve the primary key that enables you to query the Maps APIs. +Once your Azure Maps account is successfully created, retrieve the subscription key that enables you to query the Maps APIs. 1. Open your Maps account in the portal. 2. In the settings section, select **Authentication**. 3. Copy the **Primary Key** and save it locally to use later in this tutorial. >[!NOTE] > This quickstart uses the [Shared Key](azure-maps-authentication.md#shared-key-authentication) authentication approach for demonstration purposes, but the preferred approach for any production environment is to use [Azure Active Directory](azure-maps-authentication.md#azure-ad-authentication) authentication. Once your Azure Maps account is successfully created, retrieve the primary key t 3. Add the **Primary Key** value you got in the preceding section 1. Comment out all of the code in the `authOptions` function, this code is used for Azure Active Directory authentication. 1. Uncomment the last two lines in the `authOptions` function, this code is used for Shared Key authentication, the approach being used in this quickstart.- 1. Replace `<Your Azure Maps Key>` with the **Primary Key** value from the preceding section. + 1. Replace `<Your Azure Maps Key>` with the subscription key value from the preceding section. ## Open the demo application |
azure-maps | Quick Ios App | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/quick-ios-app.md | Create a new Azure Maps account with the following steps: -## Get the primary key for your account +## Get the subscription key for your account Once your Maps account is successfully created, retrieve the primary key that enables you to query the Maps APIs. |
azure-monitor | Availability Overview | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/availability-overview.md | Title: Application Insights availability tests description: Set up recurring web tests to monitor availability and responsiveness of your app or website. Previously updated : 11/15/2022 Last updated : 03/22/2023 |
azure-monitor | Availability Standard Tests | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/availability-standard-tests.md | Title: Availability Standard test - Azure Monitor Application Insights description: Set up Standard tests in Application Insights to check for availability of a website with a single request test. Previously updated : 11/15/2022 Last updated : 03/22/2023 # Standard test |
azure-monitor | Azure Web Apps Net | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/azure-web-apps-net.md | Title: Monitor Azure app services performance ASP.NET | Microsoft Docs description: Learn about application performance monitoring for Azure app services by using ASP.NET. Chart load and response time and dependency information, and set alerts on performance. Previously updated : 11/14/2022 Last updated : 03/22/2023 ms.devlang: javascript |
azure-monitor | Azure Web Apps Nodejs | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/azure-web-apps-nodejs.md | Title: Monitor Azure app services performance Node.js | Microsoft Docs description: Application performance monitoring for Azure app services using Node.js. Chart load and response time, dependency information, and set alerts on performance. Previously updated : 11/15/2022 Last updated : 03/22/2023 ms.devlang: javascript -Monitoring of your Node.js web applications running on [Azure App Services](../../app-service/index.yml) does not require any modifications to the code. This article will walk you through enabling Azure Monitor Application Insights monitoring as well as provide preliminary guidance for automating the process for large-scale deployments. +Monitoring of your Node.js web applications running on [Azure App Services](../../app-service/index.yml) doesn't require any modifications to the code. This article walks you through enabling Azure Monitor Application Insights monitoring and provides preliminary guidance for automating the process for large-scale deployments. ## Enable Application Insights The easiest way to enable application monitoring for Node.js applications runnin Turning on application monitoring in Azure portal will automatically instrument your application with Application Insights, and doesn't require any code changes. > [!NOTE]-> If both auto-instrumentation monitoring and manual SDK-based instrumentation are detected, only the manual instrumentation settings will be honored. This is to prevent duplicate data from being sent. To learn more about this, check out the [troubleshooting section](#troubleshooting) below. +> If both auto-instrumentation monitoring and manual SDK-based instrumentation are detected, only the manual instrumentation settings will be honored. This is to prevent duplicate data from being sent. To learn more about this, check out the [troubleshooting section](#troubleshooting) in this article. ### Auto-instrumentation through Azure portal The integration is in public preview. The integration adds Node.js SDK, which is :::image type="content"source="./media/azure-web-apps/change-resource.png" alt-text="Screenshot of Change your resource dropdown."::: -3. Once you have specified which resource to use, you are all set to go. +3. Once you've specified which resource to use, you're all set to go. :::image type="content"source="./media/azure-web-apps-nodejs/app-service-node.png" alt-text="Screenshot of instrument your application."::: Below is our step-by-step troubleshooting guide for extension/agent based monito - Confirm that the `Application Insights Extension Status` is `Pre-Installed Site Extension, version 2.8.x.xxxx, is running.` - If it is not running, follow the [enable Application Insights monitoring instructions](#enable-application-insights). + If it isn't running, follow the [enable Application Insights monitoring instructions](#enable-application-insights). - Navigate to *D:\local\Temp\status.json* and open *status.json*. |
azure-monitor | Console | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/console.md | - Title: Application Insights for console applications | Microsoft Docs -description: Monitor web applications for availability, performance, and usage. - Previously updated : 11/14/2022-----# Application Insights for .NET console applications --[Application Insights](./app-insights-overview.md) lets you monitor your web application for availability, performance, and usage. --You need an [Azure](https://azure.com) subscription. Sign in with a Microsoft account, which you might have for Windows, Xbox Live, or other Microsoft cloud services. Your team might have an organizational subscription to Azure. Ask the owner to add you to it by using your Microsoft account. --> [!NOTE] -> We recommend using the newer [Microsoft.ApplicationInsights.WorkerService](https://www.nuget.org/packages/Microsoft.ApplicationInsights.WorkerService) package and associated instructions from [Application Insights for Worker Service applications (non-HTTP applications)](./worker-service.md) for any console applications. This package is compatible with [Long Term Support (LTS) versions](https://dotnet.microsoft.com/platform/support/policy/dotnet-core) of .NET Core and .NET Framework or higher. ---## Get started --* In the [Azure portal](https://portal.azure.com), [create an Application Insights resource](./create-new-resource.md). -* Take a copy of the connection string. Find the connection string in the **Essentials** dropdown of the new resource you created. -* Install the latest [Microsoft.ApplicationInsights](https://www.nuget.org/packages/Microsoft.ApplicationInsights) package. -* Set the connection string in your code before you track any telemetry (or set the `APPLICATIONINSIGHTS_CONNECTION_STRING` environment variable). After that, you should be able to manually track telemetry and see it in the Azure portal. - - ```csharp - // You may use different options to create configuration as shown later in this article - TelemetryConfiguration configuration = TelemetryConfiguration.CreateDefault(); - configuration.ConnectionString = <Copy connection string from Application Insights Resource Overview>; - var telemetryClient = new TelemetryClient(configuration); - telemetryClient.TrackTrace("Hello World!"); - ``` - - > [!NOTE] - > Telemetry isn't sent instantly. Items are batched and sent by the ApplicationInsights SDK. Console apps exit after calling `Track()` methods. - > - > Telemetry might not be sent unless `Flush()` and `Sleep`/`Delay` are done before the app exits, as shown in the [full example](#full-example) later in this article. `Sleep` isn't required if you're using `InMemoryChannel`. --* Install the latest version of the [Microsoft.ApplicationInsights.DependencyCollector](https://www.nuget.org/packages/Microsoft.ApplicationInsights.DependencyCollector) package. It automatically tracks HTTP, SQL, or some other external dependency calls. --You can initialize and configure Application Insights from the code or by using `ApplicationInsights.config` file. Make sure initialization happens as early as possible. --> [!NOTE] -> *ApplicationInsights.config* isn't supported by .NET Core applications. --### Use the config file --For .NET Framework-based applications, by default, the Application Insights SDK looks for the `ApplicationInsights.config` file in the working directory when `TelemetryConfiguration` is being created. Reading the config file isn't supported on .NET Core. 
--```csharp -TelemetryConfiguration config = TelemetryConfiguration.Active; // Reads ApplicationInsights.config file if present -``` --You can also specify a path to the config file: --```csharp -using System.IO; -TelemetryConfiguration configuration = TelemetryConfiguration.CreateFromConfiguration(File.ReadAllText("C:\\ApplicationInsights.config")); -var telemetryClient = new TelemetryClient(configuration); -``` --For more information, see [Configuration file reference](configuration-with-applicationinsights-config.md). --You can get a full example of the config file by installing the latest version of the [Microsoft.ApplicationInsights.WindowsServer](https://www.nuget.org/packages/Microsoft.ApplicationInsights.WindowsServer) package. Here's the *minimal* configuration for dependency collection that's equivalent to the code example: --```xml -<?xml version="1.0" encoding="utf-8"?> -<ApplicationInsights xmlns="http://schemas.microsoft.com/ApplicationInsights/2013/Settings"> - <ConnectionString>"Copy connection string from Application Insights Resource Overview"</ConnectionString> - <TelemetryInitializers> - <Add Type="Microsoft.ApplicationInsights.DependencyCollector.HttpDependenciesParsingTelemetryInitializer, Microsoft.AI.DependencyCollector"/> - </TelemetryInitializers> - <TelemetryModules> - <Add Type="Microsoft.ApplicationInsights.DependencyCollector.DependencyTrackingTelemetryModule, Microsoft.AI.DependencyCollector"> - <ExcludeComponentCorrelationHttpHeadersOnDomains> - <Add>core.windows.net</Add> - <Add>core.chinacloudapi.cn</Add> - <Add>core.cloudapi.de</Add> - <Add>core.usgovcloudapi.net</Add> - <Add>localhost</Add> - <Add>127.0.0.1</Add> - </ExcludeComponentCorrelationHttpHeadersOnDomains> - <IncludeDiagnosticSourceActivities> - <Add>Microsoft.Azure.ServiceBus</Add> - <Add>Microsoft.Azure.EventHubs</Add> - </IncludeDiagnosticSourceActivities> - </Add> - </TelemetryModules> - <TelemetryChannel Type="Microsoft.ApplicationInsights.WindowsServer.TelemetryChannel.ServerTelemetryChannel, Microsoft.AI.ServerTelemetryChannel"/> -</ApplicationInsights> --``` --### Configure telemetry collection from code --> [!NOTE] -> Reading the config file isn't supported on .NET Core. --* During application startup, create and configure a `DependencyTrackingTelemetryModule` instance. It must be singleton and must be preserved for the application lifetime. -- ```csharp - var module = new DependencyTrackingTelemetryModule(); - - // prevent Correlation Id to be sent to certain endpoints. You may add other domains as needed. - module.ExcludeComponentCorrelationHttpHeadersOnDomains.Add("core.windows.net"); - //... - - // enable known dependency tracking, note that in future versions, we will extend this list. - // please check default settings in https://github.com/Microsoft/ApplicationInsights-dotnet-server/blob/develop/Src/DependencyCollector/DependencyCollector/ApplicationInsights.config.install.xdt - - module.IncludeDiagnosticSourceActivities.Add("Microsoft.Azure.ServiceBus"); - module.IncludeDiagnosticSourceActivities.Add("Microsoft.Azure.EventHubs"); - //.... - - // initialize the module - module.Initialize(configuration); - ``` --* Add common telemetry initializers: -- ```csharp - // ensures proper DependencyTelemetry.Type is set for Azure RESTful API calls - configuration.TelemetryInitializers.Add(new HttpDependenciesParsingTelemetryInitializer()); - ``` - - If you created configuration with a plain `TelemetryConfiguration()` constructor, you need to enable correlation support additionally. 
*It isn't needed* if you read configuration from a file or used `TelemetryConfiguration.CreateDefault()` or `TelemetryConfiguration.Active`. - - ```csharp - configuration.TelemetryInitializers.Add(new OperationCorrelationTelemetryInitializer()); - ``` --* You might also want to install and initialize the Performance Counter collector module as described at [this website](https://apmtips.com/posts/2017-02-13-enable-application-insights-live-metrics-from-code/). --#### Full example --```csharp -using Microsoft.ApplicationInsights; -using Microsoft.ApplicationInsights.DependencyCollector; -using Microsoft.ApplicationInsights.Extensibility; -using System.Net.Http; -using System.Threading.Tasks; --namespace ConsoleApp -{ - class Program - { - static void Main(string[] args) - { - TelemetryConfiguration configuration = TelemetryConfiguration.CreateDefault(); -- configuration.ConnectionString = "removed"; - configuration.TelemetryInitializers.Add(new HttpDependenciesParsingTelemetryInitializer()); -- var telemetryClient = new TelemetryClient(configuration); - using (InitializeDependencyTracking(configuration)) - { - // run app... -- telemetryClient.TrackTrace("Hello World!"); -- using (var httpClient = new HttpClient()) - { - // Http dependency is automatically tracked! - httpClient.GetAsync("https://microsoft.com").Wait(); - } -- } -- // before exit, flush the remaining data - telemetryClient.Flush(); -- // Console apps should use the WorkerService package. - // This uses ServerTelemetryChannel which does not have synchronous flushing. - // For this reason we add a short 5s delay in this sample. - - Task.Delay(5000).Wait(); -- // If you're using InMemoryChannel, Flush() is synchronous and the short delay is not required. -- } -- static DependencyTrackingTelemetryModule InitializeDependencyTracking(TelemetryConfiguration configuration) - { - var module = new DependencyTrackingTelemetryModule(); -- // prevent Correlation Id to be sent to certain endpoints. You may add other domains as needed. - module.ExcludeComponentCorrelationHttpHeadersOnDomains.Add("core.windows.net"); - module.ExcludeComponentCorrelationHttpHeadersOnDomains.Add("core.chinacloudapi.cn"); - module.ExcludeComponentCorrelationHttpHeadersOnDomains.Add("core.cloudapi.de"); - module.ExcludeComponentCorrelationHttpHeadersOnDomains.Add("core.usgovcloudapi.net"); - module.ExcludeComponentCorrelationHttpHeadersOnDomains.Add("localhost"); - module.ExcludeComponentCorrelationHttpHeadersOnDomains.Add("127.0.0.1"); -- // enable known dependency tracking, note that in future versions, we will extend this list. - // please check default settings in https://github.com/microsoft/ApplicationInsights-dotnet-server/blob/develop/WEB/Src/DependencyCollector/DependencyCollector/ApplicationInsights.config.install.xdt -- module.IncludeDiagnosticSourceActivities.Add("Microsoft.Azure.ServiceBus"); - module.IncludeDiagnosticSourceActivities.Add("Microsoft.Azure.EventHubs"); -- // initialize the module - module.Initialize(configuration); -- return module; - } - } -} --``` --## Next steps --* [Monitor dependencies](./asp-net-dependencies.md) to see if REST, SQL, or other external resources are slowing you down. -* [Use the API](./api-custom-events-metrics.md) to send your own events and metrics for a more detailed view of your app's performance and usage. |
azure-monitor | Diagnostic Search | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/diagnostic-search.md | Title: Use Search in Azure Application Insights | Microsoft Docs description: Search and filter raw telemetry sent by your web app. Previously updated : 07/30/2019 Last updated : 03/22/2023 |
azure-monitor | Opencensus Python Dependency | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/opencensus-python-dependency.md | Title: Dependency Tracking in Azure Application Insights with OpenCensus Python | Microsoft Docs description: Monitor dependency calls for your Python apps via OpenCensus Python. Previously updated : 8/19/2022 Last updated : 03/22/2023 ms.devlang: python |
azure-monitor | Powershell | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/powershell.md | Title: Automate Application Insights with PowerShell | Microsoft Docs description: Automate creating and managing resources, alerts, and availability tests in PowerShell by using an Azure Resource Manager template. Previously updated : 05/02/2020 Last updated : 03/22/2023 |
azure-monitor | Sla Report | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/sla-report.md | Title: Downtime, SLA, and outages workbook - Application Insights description: Calculate and report SLA for web test through a single pane of glass across your Application Insights resources and Azure subscriptions. Previously updated : 05/4/2021 Last updated : 03/22/2023 ms.reviwer: casocha |
azure-monitor | Standard Metrics | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/standard-metrics.md | Title: Azure Application Insights standard metrics | Microsoft Docs description: This article lists Azure Application Insights metrics with supported aggregations and dimensions. Previously updated : 07/03/2019 Last updated : 03/22/2023 |
azure-monitor | Tutorial Asp Net Core | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/tutorial-asp-net-core.md | description: Application Insights SDK tutorial to monitor ASP.NET Core web appli ms.devlang: csharp Previously updated : 11/15/2022 Last updated : 03/22/2023 |
azure-vmware | Configure Customer Managed Keys | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-vmware/configure-customer-managed-keys.md | Before you begin to enable customer-managed key (CMK) functionality, ensure the privateCloudId=$(az vmware private-cloud show --name $privateCloudName --resource-group $resourceGroupName --query id | tr -d '"') ``` - To configure the system-assigned identity on Azure VMware Solution private cloud with Azure CLI, call [az-resource-update](/cli/azure/resource?view=azure-cli-latest#az-resource-update) and provide the variable for the private cloud resource ID that you previously retrieved. + To configure the system-assigned identity on Azure VMware Solution private cloud with Azure CLI, call [az-resource-update](/cli/azure/resource?view=azure-cli-latest#az-resource-update&preserve-view=true) and provide the variable for the private cloud resource ID that you previously retrieved. ```azurecli-interactive az resource update --ids $privateCloudId --set identity.type=SystemAssigned --api-version "2021-12-01" Before you begin to enable customer-managed key (CMK) functionality, ensure the 1. Navigate to **Key vaults** and locate the key vault you want to use. 1. From the left navigation, under **Settings**, select **Access policies**. 1. In **Access policies**, select **Add Access Policy**.- 1. From the Key Permissions drop-down, check **Select all**, **Unwrap Key**, and **Wrap Key**. + 1. From the Key Permissions drop-down, check: **Select all**, **Get**, **List**, **Wrap Key**, and **Unwrap Key**. 1. Under Select principal, select **None selected**. A new **Principal** window with a search box will open. 1. In the search box, paste the **Object ID** from the previous step, or search the private cloud name you want to use. Choose **Select** when you're done. 1. Select **ADD**. Navigate to your **Azure Key Vault** and provide access to the SDDC on Azure Key # [Azure CLI](#tab/azure-cli) -To configure customer-managed keys for an Azure VMware Solution private cloud with automatic updating of the key version, call [az vmware private-cloud add-cmk-encryption](/cli/azure/vmware/private-cloud?view=azure-cli-latest#az-vmware-private-cloud-add-cmk-encryption). Get the key vault URL and save it to a variable. You'll need this value in the next step to enable CMK. +To configure customer-managed keys for an Azure VMware Solution private cloud with automatic updating of the key version, call [az vmware private-cloud add-cmk-encryption](/cli/azure/vmware/private-cloud?view=azure-cli-latest#az-vmware-private-cloud-add-cmk-encryption&preserve-view=true). Get the key vault URL and save it to a variable. You'll need this value in the next step to enable CMK. ```azurecli-interactive keyVaultUrl =$(az keyvault show --name <keyvault_name> --resource-group <resource_group_name> --query properties.vaultUri --output tsv) |
backup | Blob Backup Configure Manage | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/blob-backup-configure-manage.md | For more information about the supported scenarios, limitations, and availabilit - Vaulted backup of blobs is a managed offsite backup solution that transfers data to the backup vault and retains as per the retention configured in the backup policy. You can retain data for a maximum of *10 years*. - Currently, you can use the vaulted backup solution to restore data to a different storage account only. While performing restores, ensure that the target storage account doesn't contain any *containers* with the same name as those backed up in a recovery point. If any conflicts arise due to the same name of containers, the restore operation fails.-- Ensure the storage accounts that need to be backed up have cross-tenant replication enabled. You can check this by navigating to the storage account > Object replication > Advanced settings. Once here, ensure that the check-box is enabled.+- **Ensure the storage accounts that need to be backed up have cross-tenant replication enabled. You can check this by navigating to the storage account > Object replication > Advanced settings. Once here, ensure that the check-box is enabled.** For more information about the supported scenarios, limitations, and availability, See the [support matrix](blob-backup-support-matrix.md). |
cognitive-services | Concept Model Customization | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Computer-vision/concept-model-customization.md | In order to train your model effectively, use images with visual variety. Select Additionally, make sure all of your training images meet the following criteria: -- The image must be presented in JPEG, PNG, GIF, BMP, WEBP, ICO, TIFF, or MPO format-- The file size of the image must be less than 20 megabytes (MB)-- The dimensions of the image must be greater than 50 x 50 pixels and less than 16,000 x 16,000 pixels+- The image must be presented in JPEG, PNG, GIF, BMP, WEBP, ICO, TIFF, or MPO format. +- The file size of the image must be less than 20 megabytes (MB). +- The dimensions of the image must be greater than 50 x 50 pixels and less than 16,000 x 16,000 pixels. ### COCO file |
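The training-image criteria above (supported formats, file size under 20 MB, dimensions greater than 50 x 50 and less than 16,000 x 16,000 pixels) can be screened locally before you assemble a dataset. The following sketch uses Pillow for that check; Pillow and the `training_images` folder are assumptions for illustration, not something the article requires, and the service performs its own validation in any case.

```python
# Local pre-check for the training-image criteria listed above. Pillow is just
# one way to read the format and dimensions; the article doesn't mandate a tool.
from pathlib import Path
from PIL import Image

SUPPORTED_FORMATS = {"JPEG", "PNG", "GIF", "BMP", "WEBP", "ICO", "TIFF", "MPO"}
MAX_BYTES = 20 * 1024 * 1024    # file size must be less than 20 MB
MIN_SIDE, MAX_SIDE = 50, 16000  # dimensions must be greater than 50 x 50 and less than 16,000 x 16,000

def check_training_image(path):
    """Return the reasons an image fails the documented criteria (empty list if it passes)."""
    problems = []
    if path.stat().st_size >= MAX_BYTES:
        problems.append("file is 20 MB or larger")
    with Image.open(path) as img:
        if img.format not in SUPPORTED_FORMATS:
            problems.append(f"unsupported format: {img.format}")
        width, height = img.size
        if width <= MIN_SIDE or height <= MIN_SIDE:
            problems.append(f"dimensions too small: {width}x{height}")
        if width >= MAX_SIDE or height >= MAX_SIDE:
            problems.append(f"dimensions too large: {width}x{height}")
    return problems

# "training_images" is a placeholder folder of candidate images.
for image_path in sorted(Path("training_images").glob("*")):
    if image_path.is_file():
        issues = check_training_image(image_path)
        if issues:
            print(f"{image_path.name}: {', '.join(issues)}")
```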
cognitive-services | Coco Verification | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Computer-vision/how-to/coco-verification.md | + + Title: Verify a COCO annotation file ++description: Use a Python script to verify your COCO file for custom model training. +++++ Last updated : 03/21/2023++++# Check the format of your COCO annotation file ++<!-- nbstart https://raw.githubusercontent.com/Azure-Samples/cognitive-service-vision-model-customization-python-samples/main/docs/check_coco_annotation.ipynb --> ++> [!TIP] +> Contents of _check_coco_annotation.ipynb_. **[Open in GitHub](https://github.com/Azure-Samples/cognitive-service-vision-model-customization-python-samples/blob/main/docs/check_coco_annotation.ipynb)**. ++This notebook demonstrates how to check if the format of your annotation file is correct. First, install the python samples package from the command line: ++```python +pip install cognitive-service-vision-model-customization-python-samples +``` ++Then, run the following python code to check the file's format. You can either enter this code in a Python script, or run the [Jupyter Notebook](https://github.com/Azure-Samples/cognitive-service-vision-model-customization-python-samples/blob/main/docs/check_coco_annotation.ipynb) on a compatible platform. ++```python +from cognitive_service_vision_model_customization_python_samples import check_coco_annotation_file, AnnotationKind, Purpose +import pathlib +import json ++coco_file_path = pathlib.Path("{your_coco_file_path}") +annotation_kind = AnnotationKind.MULTICLASS_CLASSIFICATION # or AnnotationKind.OBJECT_DETECTION +purpose = Purpose.TRAINING # or Purpose.EVALUATION ++check_coco_annotation_file(json.loads(coco_file_path.read_text()), annotation_kind, purpose) +``` ++<!-- nbend --> ++## Use COCO file in a new project ++Once your COCO file is verified, you're ready to import it to your model customization project. See [Create and train a custom model](model-customization.md) and go to the section on selecting/importing a COCO file—you can follow the guide from there to the end. ++## Next steps ++* [Create and train a custom model](model-customization.md) |
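Because `check_coco_annotation_file` accepts an already-parsed object (the article passes it `json.loads(...)`), you can also hand it a small in-memory annotation to get a feel for the expected shape. The sketch below builds a tiny single-image classification annotation in the COCO layout used elsewhere in this documentation set; the blob URL and file name are placeholders.

```python
# Hedged example: run the checker against a tiny in-memory COCO classification
# annotation instead of a file on disk. The blob URL and file names are
# placeholders; only the overall shape matters here.
from cognitive_service_vision_model_customization_python_samples import check_coco_annotation_file, AnnotationKind, Purpose

tiny_coco = {
    "images": [
        {"id": 1, "width": 224.0, "height": 224.0, "file_name": "images/cat.jpg",
         "absolute_url": "https://{your_blob}.blob.core.windows.net/datasets/sample/images/cat.jpg"},
    ],
    "annotations": [
        {"id": 1, "category_id": 1, "image_id": 1},
    ],
    "categories": [{"id": 1, "name": "cat"}],
}

# The checker reports any structural problems through its own output; adjust the
# annotation kind and purpose to match your scenario.
check_coco_annotation_file(tiny_coco, AnnotationKind.MULTICLASS_CLASSIFICATION, Purpose.TRAINING)
```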
cognitive-services | Migrate From Custom Vision | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Computer-vision/how-to/migrate-from-custom-vision.md | -This guide uses a Python script to take all of the training data from an existing Custom Vision project (images and their label data) and convert it to a COCO file. You can then import the COCO file into Vision Studio to train a custom model. See [Create and train a custom model](model-customization.md) and go to the section on importing a COCO file—you can follow the guide from there to the end. +This guide uses Python code to take all of the training data from an existing Custom Vision project (images and their label data) and convert it to a COCO file. You can then import the COCO file into Vision Studio to train a custom model. See [Create and train a custom model](model-customization.md) and go to the section on importing a COCO file—you can follow the guide from there to the end. ## Prerequisites This guide uses a Python script to take all of the training data from an existin * A Custom Vision resource where an existing project is stored. * An Azure Storage resource - [Create one](../../../storage/common/storage-account-create.md?tabs=azure-portal) +#### [Jupyter Notebook](#tab/notebook) ++This notebook exports your image data and annotations from the workspace of a Custom Vision Service project to your own COCO file in a storage blob, ready for training with Image Analysis Model Customization. You can run the code in this section using a custom Python script, or you can download and run the [Notebook](https://github.com/Azure-Samples/cognitive-service-vision-model-customization-python-samples/blob/main/docs/export_cvs_data_to_blob_storage.ipynb) on a compatible platform. ++<!-- nbstart https://raw.githubusercontent.com/Azure-Samples/cognitive-service-vision-model-customization-python-samples/main/docs/export_cvs_data_to_blob_storage.ipynb --> ++> [!TIP] +> Contents of _export_cvs_data_to_blob_storage.ipynb_. **[Open in GitHub](https://github.com/Azure-Samples/cognitive-service-vision-model-customization-python-samples/blob/main/docs/export_cvs_data_to_blob_storage.ipynb)**. +++## Install the python samples package ++Run the following command to install the required python samples package: ++```python +pip install cognitive-service-vision-model-customization-python-samples +``` ++## Authentication ++Next, provide the credentials of your Custom Vision project and your blob storage container. ++You need to fill in the correct parameter values. You need the following information: ++- The name of the Azure Storage account you want to use with your new custom model project +- The key for that storage account +- The name of the container you want to use in that storage account +- Your Custom Vision training key +- Your Custom Vision endpoint URL +- The project ID of your Custom Vision project ++The Azure Storage credentials can be found on that resource's page in the Azure portal. The Custom Vision credentials can be found in the Custom Vision project settings page on the [Custom Vision web portal](https://customvision.ai). 
+++```python +azure_storage_account_name = '' +azure_storage_account_key = '' +azure_storage_container_name = '' ++custom_vision_training_key = '' +custom_vision_endpoint = '' +custom_vision_project_id = '' +``` ++## Run the migration ++When you run the migration code, the Custom Vision training images will be saved to a `{project_name}_{project_id}/images` folder in your specified Azure blob storage container, and the COCO file will be saved to `{project_name}_{project_id}/train.json` in that same container. Both tagged and untagged images will be exported, including any **Negative**-tagged images. ++> [!IMPORTANT] +> Image Analysis Model Customization does not currently support **multilabel** classification training, but you can still export data from a Custom Vision multilabel classification project. ++```python +from cognitive_service_vision_model_customization_python_samples import export_data +import logging +logging.getLogger().setLevel(logging.INFO) +logging.getLogger('azure.core.pipeline.policies.http_logging_policy').setLevel(logging.WARNING) ++n_process = 8 +export_data(azure_storage_account_name, azure_storage_account_key, azure_storage_container_name, custom_vision_endpoint, custom_vision_training_key, custom_vision_project_id, n_process) +``` ++<!-- nbend --> ++#### [Python](#tab/python) + ## Install libraries This script requires certain Python libraries. Install them in your project directory with the following command. You need to fill in the correct parameter values. You need the following informa - The name of the Azure Storage account you want to use with your new custom model project - The key for that storage account - The name of the container you want to use in that storage account ++ ## Use COCO file in a new project -The script generates a COCO file and uploads it to the blob storage location you specified. You can now import it to your model customization project. See [Create and train a custom model](model-customization.md) and go to the section on selecting a COCO file—you can follow the guide from there to the end. +The script generates a COCO file and uploads it to the blob storage location you specified. You can now import it to your Model Customization project. See [Create and train a custom model](model-customization.md) and go to the section on selecting/importing a COCO file—you can follow the guide from there to the end. ## Next steps -* [Create and train a custom model](model-customization.md) +* [Create and train a custom model](model-customization.md) |
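After `export_data` finishes, the exported `train.json` can also be registered as a training dataset with the same samples package, following the `DatasetClient`/`Dataset` pattern shown in the model customization guide. The sketch below is one possible follow-up rather than part of the migration script; every resource name, the container path, and the SAS token are placeholders you must replace.

```python
# Possible follow-up (not part of the original migration script): register the
# exported COCO file as a training dataset using the DatasetClient/Dataset
# classes from the same samples package. All names and tokens are placeholders.
from cognitive_service_vision_model_customization_python_samples import (
    ResourceType, DatasetClient, Dataset, AnnotationKind, AuthenticationKind, Authentication
)

resource_type = ResourceType.SINGLE_SERVICE_RESOURCE
resource_name = '{your_computer_vision_resource_name}'
resource_key = '{your_computer_vision_resource_key}'

# export_data writes the annotation file to {project_name}_{project_id}/train.json in your container.
annotation_file_uris = ['https://{your_storage_account}.blob.core.windows.net/{your_container}/{project_name}_{project_id}/train.json']

dataset = Dataset(name='migrated-from-custom-vision',  # placeholder dataset name
                  annotation_kind=AnnotationKind.MULTICLASS_CLASSIFICATION,  # or OBJECT_DETECTION, to match your project
                  annotation_file_uris=annotation_file_uris,
                  authentication=Authentication(AuthenticationKind.SAS, '{your_sas_token}'))

dataset_client = DatasetClient(resource_type, resource_name, None, resource_key)  # None: no multi-service endpoint
registered = dataset_client.register_dataset(dataset)
print(registered.__dict__)
```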
cognitive-services | Model Customization | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Computer-vision/how-to/model-customization.md | This guide shows you how to create and train a custom image classification model ## Prerequisites * Azure subscription - [Create one for free](https://azure.microsoft.com/free/cognitive-services)-* Once you have your Azure subscription, <a href="https://portal.azure.com/#create/Microsoft.CognitiveServicesComputerVision" title="Create a Computer Vision resource" target="_blank">create a Computer Vision resource </a> in the Azure portal to get your key and endpoint. If you're following this guide using Vision Studio, you must create your resource in the East US region. After it deploys, select **Go to resource**. Copy the key and endpoint to a temporary location to use later on. -* An Azure Storage resource - [Create one](../../../storage/common/storage-account-create.md?tabs=azure-portal) +* Once you have your Azure subscription, <a href="https://portal.azure.com/#create/Microsoft.CognitiveServicesComputerVision" title="Create a Computer Vision resource" target="_blank">create a Computer Vision resource </a> in the Azure portal to get your key and endpoint. If you're following this guide using Vision Studio, you must create your resource in the East US region. If you're using the Python library, you can create it in the East US, West US 2, or West Europe region. After it deploys, select **Go to resource**. Copy the key and endpoint to a temporary location to use later on. +* An Azure Storage resource - [Create one](/azure/storage/common/storage-account-create?tabs=azure-portal) * A set of images with which to train your classification model. You can use the set of [sample images on GitHub](https://github.com/Azure-Samples/cognitive-services-sample-data-files/tree/master/CustomVision/ImageClassification/Images). Or, you can use your own images. You only need about 3-5 images per class. > [!NOTE] > We do not recommend you use custom models for business critical environments due to potential high latency. When customers train custom models in Vision Studio, those custom models belong to the Computer Vision resource that they were trained under and the customer is able to make calls to those models using the **Analyze Image** API. When they make these calls, the custom model is loaded in memory and the prediction infrastructure is initialized. While this happens, customers might experience longer than expected latency to receive prediction results. +#### [Python](#tab/python) ++Train your own image classifier (IC) or object detector (OD) with your own data using Image Analysis model customization and Python. ++You can run through all of the model customization steps using a Python sample package. You can run the code in this section using a Python script, or you can download and run the Notebook on a compatible platform. ++<!-- nbstart https://raw.githubusercontent.com/Azure-Samples/cognitive-service-vision-model-customization-python-samples/main/docs/cognitive_service_vision_model_customization.ipynb --> ++> [!TIP] +> Contents of _cognitive_service_vision_model_customization.ipynb_. **[Open in GitHub](https://github.com/Azure-Samples/cognitive-service-vision-model-customization-python-samples/blob/main/docs/cognitive_service_vision_model_customization.ipynb)**. 
++## Install the python samples package ++Install the [sample code](https://pypi.org/project/cognitive-service-vision-model-customization-python-samples/) to train/predict custom models with Python: ++```bash +pip install cognitive-service-vision-model-customization-python-samples +``` ++## Authentication ++Enter your Computer Vision endpoint URL, key, and the name of the resource, into the code below. ++```python +# Resource and key +import logging +logging.getLogger().setLevel(logging.INFO) +from cognitive_service_vision_model_customization_python_samples import ResourceType ++resource_type = ResourceType.SINGLE_SERVICE_RESOURCE # or ResourceType.MULTI_SERVICE_RESOURCE ++resource_name = None +multi_service_endpoint = None ++if resource_type == ResourceType.SINGLE_SERVICE_RESOURCE: + resource_name = '{specify_your_resource_name}' + assert resource_name +else: + multi_service_endpoint = '{specify_your_service_endpoint}' + assert multi_service_endpoint ++resource_key = '{specify_your_resource_key}' +``` ++## Prepare a dataset from Azure blob storage ++To train a model with your own dataset, the dataset should be arranged in the COCO format described below, hosted on Azure blob storage, and accessible from your Computer Vision resource. ++### Dataset annotation format ++Image Analysis uses the COCO file format for indexing/organizing the training images and their annotations. Below are examples and explanations of what specific format is needed for multiclass classification and object detection. ++Image Analysis model customization for classification is different from other kinds of vision training, as we utilize your class names, as well as image data, in training. So, be sure provide meaningful category names in the annotations. ++> [!NOTE] +> In the example dataset, there are few images for the sake of simplicity. Although [Florence models](https://www.microsoft.com/research/publication/florence-a-new-foundation-model-for-computer-vision/) achieve great few-shot performance (high model quality even with little data available), it's good to have more data for the model to learn. Our recommendation is to have at least five images per class, and the more the better. ++Once your COCO annotation file is prepared, you can use the [COCO file verification script](coco-verification.md) to check the format. ++#### Multiclass classification example ++```json +{ + "images": [{"id": 1, "width": 224.0, "height": 224.0, "file_name": "images/siberian-kitten.jpg", "absolute_url": "https://{your_blob}.blob.core.windows.net/datasets/cat_dog/images/siberian-kitten.jpg"}, + {"id": 2, "width": 224.0, "height": 224.0, "file_name": "images/kitten-3.jpg", "absolute_url": "https://{your_blob}.blob.core.windows.net/datasets/cat_dog/images/kitten-3.jpg"}], + "annotations": [ + {"id": 1, "category_id": 1, "image_id": 1}, + {"id": 2, "category_id": 1, "image_id": 2}, + ], + "categories": [{"id": 1, "name": "cat"}, {"id": 2, "name": "dog"}] +} +``` ++Besides `absolute_url`, you can also use `coco_url` (the system accepts either field name). 
++#### Object detection example ++```json +{ + "images": [{"id": 1, "width": 224.0, "height": 224.0, "file_name": "images/siberian-kitten.jpg", "absolute_url": "https://{your_blob}.blob.core.windows.net/datasets/cat_dog/images/siberian-kitten.jpg"}, + {"id": 2, "width": 224.0, "height": 224.0, "file_name": "images/kitten-3.jpg", "absolute_url": "https://{your_blob}.blob.core.windows.net/datasets/cat_dog/images/kitten-3.jpg"}], + "annotations": [ + {"id": 1, "category_id": 1, "image_id": 1, "bbox": [0.1, 0.1, 0.3, 0.3]}, + {"id": 2, "category_id": 1, "image_id": 2, "bbox": [0.3, 0.3, 0.6, 0.6]}, + {"id": 3, "category_id": 2, "image_id": 2, "bbox": [0.2, 0.2, 0.7, 0.7]} + ], + "categories": [{"id": 1, "name": "cat"}, {"id": 2, "name": "dog"}] +} +``` ++The values in `bbox: [left, top, width, height]` are relative to the image width and height. ++### Blob storage directory structure ++Following the examples above, the data directory in your Azure Blob Container `https://{your_blob}.blob.core.windows.net/datasets/` should be arranged like below, where `train_coco.json` is the annotation file. ++``` +cat_dog/ + images/ + 1.jpg + 2.jpg + train_coco.json +``` ++> [!TIP] +> Quota limit information, including the maximum number of images and categories supported, maximum image size, and so on, can be found on the [concept page](../concept-model-customization.md). ++### Grant Computer Vision access to your Azure data blob ++You need to take an extra step to give your Computer Vision resource access to read the contents of your Azure blob storage container. There are two ways to do this. ++#### Option 1: Shared access signature (SAS) ++You can generate a SAS token with at least `read` permission on your Azure Blob Container. This is the option used in the code below. For instructions on acquiring a SAS token, see [Create SAS tokens](/azure/cognitive-services/translator/document-translation/how-to-guides/create-sas-tokens?tabs=Containers). ++#### Option 2: Managed Identity or public accessible ++You can also use [Managed Identity](/azure/active-directory/managed-identities-azure-resources/overview) to grant access. ++Below is a series of steps for allowing the system-assigned Managed Identity of your Computer Vision resource to access your blob storage. In the Azure portal: ++1. Go to the **Identity / System assigned** tab of your Computer Vision resource, and change the **Status** to **On**. +1. Go to the **Access Control (IAM) / Role assignment** tab of your blob storage resource, select **Add / Add role assignment**, and choose either **Storage Blob Data Contributor** or **Storage Blob Data Reader**. +1. Select **Next**, and choose **Managed Identity** under **Assign access to**, and then select **Select members**. +1. Choose your subscription, select **Computer Vision** as the Managed Identity, and look up the one that matches your Computer Vision resource name. ++### Register the dataset ++Once your dataset has been prepared and hosted on your Azure blob storage container, with access granted to your Computer Vision resource, you can register it with the service. ++> [!NOTE] +> The service only accesses your storage data during training. It doesn't keep copies of your data beyond the training cycle.
+
+```python
+from cognitive_service_vision_model_customization_python_samples import DatasetClient, Dataset, AnnotationKind, AuthenticationKind, Authentication
+
+dataset_name = '{specify_your_dataset_name}'
+auth_kind = AuthenticationKind.SAS # or AuthenticationKind.MI
+
+dataset_client = DatasetClient(resource_type, resource_name, multi_service_endpoint, resource_key)
+annotation_file_uris = ['{specify_your_annotation_uri}'] # example: https://example_data.blob.core.windows.net/datasets/cat_dog/train_coco.json
+# register dataset
+if auth_kind == AuthenticationKind.SAS:
+    # option 1: sas
+    sas_auth = Authentication(AuthenticationKind.SAS, '{your_sas_token}') # note the token/query string is needed, not the full url
+    dataset = Dataset(name=dataset_name,
+                      annotation_kind=AnnotationKind.MULTICLASS_CLASSIFICATION, # see AnnotationKind for all annotation kinds
+                      annotation_file_uris=annotation_file_uris,
+                      authentication=sas_auth)
+else:
+    # option 2: managed identity or publicly accessible storage. make sure your storage is accessible via the managed identity if it isn't publicly accessible
+    dataset = Dataset(name=dataset_name,
+                      annotation_kind=AnnotationKind.MULTICLASS_CLASSIFICATION, # see AnnotationKind for all annotation kinds
+                      annotation_file_uris=annotation_file_uris)
+
+reg_dataset = dataset_client.register_dataset(dataset)
+logging.info(f'Register dataset: {reg_dataset.__dict__}')
+
+# specify your evaluation dataset here; you can follow the same registration process as the training dataset
+eval_dataset = None
+if eval_dataset:
+    reg_eval_dataset = dataset_client.register_dataset(eval_dataset)
+    logging.info(f'Register eval dataset: {reg_eval_dataset.__dict__}')
+```
+
+## Train a model
+
+After you register the dataset, use it to train a custom model:
+
+```python
+from cognitive_service_vision_model_customization_python_samples import TrainingClient, Model, ModelKind, TrainingParameters, EvaluationParameters
+
+model_name = '{specify_your_model_name}'
+
+training_client = TrainingClient(resource_type, resource_name, multi_service_endpoint, resource_key)
+train_params = TrainingParameters(training_dataset_name=dataset_name, time_budget_in_hours=1, model_kind=ModelKind.GENERIC_IC) # see ModelKind for all valid model kinds
+eval_params = EvaluationParameters(test_dataset_name=eval_dataset.name) if eval_dataset else None
+model = Model(model_name, train_params, eval_params)
+model = training_client.train_model(model)
+logging.info(f'Start training: {model.__dict__}')
+```
+
+## Check the training status
+
+Use the following code to check the status of the asynchronous training operation.
+
+```python
+from cognitive_service_vision_model_customization_python_samples import TrainingClient
+
+training_client = TrainingClient(resource_type, resource_name, multi_service_endpoint, resource_key)
+model = training_client.wait_for_completion(model_name, 30)
+```
+
+## Predict with a sample image
+
+Use the following code to get a prediction with a new sample image.
++```python +from cognitive_service_vision_model_customization_python_samples import PredictionClient +prediction_client = PredictionClient(resource_type, resource_name, multi_service_endpoint, resource_key) ++with open('path_to_your_test_image.png', 'rb') as f: + img = f.read() ++prediction = prediction_client.predict(model_name, img, content_type='image/png') +logging.info(f'Prediction: {prediction}') +``` ++<!-- nbend --> ++ #### [Vision Studio](#tab/studio) ## Create a new custom model -Begin by going to [Vision Studio](https://portal.vision.cognitive.azure.com/) and selecting the **Image analysis** tab. Then select either the **Extract common tags from images** tile for image classification or the **Extract common objects in images** tile for object detection. This guide will demonstrate a custom image classification model. +Begin by going to [Vision Studio](https://portal.vision.cognitive.azure.com/) and selecting the **Image analysis** tab. Then select either the **Extract common tags from images** tile for image classification or the **Extract common objects in images** tile for object detection. This guide demonstrates a custom image classification model. > [!IMPORTANT] > To train a custom model in Vision Studio, your Azure subscription needs to be approved for access. Please request access using [this form](https://aka.ms/visionaipublicpreview). Then, select the container from the Azure Blob Storage account where you stored You need a COCO file to convey the labeling information. An easy way to generate a COCO file is to create an Azure Machine Learning project, which comes with a data-labeling workflow. -In the dataset details page, select **Add a new Data Labeling project**. Name it and select **Create a new workspace**. That will open a new Azure portal tab where you can create the Azure Machine Learning project. +In the dataset details page, select **Add a new Data Labeling project**. Name it and select **Create a new workspace**. That opens a new Azure portal tab where you can create the Azure Machine Learning project.  Once you've added all the class labels, save them, select **start** on the proje Choose **Start labeling** and follow the prompts to label all of your images. When you're finished, return to the Vision Studio tab in your browser. -Now select **Add COCO file**, then select **Import COCO file from an Azure ML Data Labeling project**. This will import the labeled data from Azure Machine Learning. +Now select **Add COCO file**, then select **Import COCO file from an Azure ML Data Labeling project**. This imports the labeled data from Azure Machine Learning. The COCO file you just created is now stored in the Azure Storage container that you linked to this project. You can now import it into the model customization workflow. Select it from the drop-down list. Once the COCO file is imported into the dataset, the dataset can be used for training a model. Then select a time budget and train the model. For small examples, you can use a  -It may take some time for the training to complete. Image Analysis 4.0 models can be very accurate with only a small set of training data, but they take longer to train than previous models. +It may take some time for the training to complete. Image Analysis 4.0 models can be accurate with only a small set of training data, but they take longer to train than previous models. ## Evaluate the trained model After training has completed, you can view the model's performance evaluation. 
T - Image classification: Average Precision, Accuracy Top 1, Accuracy Top 5 - Object detection: Mean Average Precision @ 30, Mean Average Precision @ 50, Mean Average Precision @ 75 -If an evaluation set is not provided when training the model, the reported performance is estimated based on part of the training set. We strongly recommend you use an evaluation dataset (using the same process as above) to have a reliable estimation of your model performance. +If an evaluation set isn't provided when training the model, the reported performance is estimated based on part of the training set. We strongly recommend you use an evaluation dataset (using the same process as above) to have a reliable estimation of your model performance.  Once you've built a custom model, you can go back to the **Extract common tags f  -The prediction results will appear in the right column. +The prediction results appear in the right column. #### [REST API](#tab/rest) |
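For the REST API path, the equivalent of the prediction step is a raw HTTP call against your resource's custom-model analyze route. The sketch below is only an illustration: the `imageanalysis:analyze` route, the `model-name` parameter, and the preview `api-version` value are assumptions to verify against the current Image Analysis REST reference, and the endpoint, key, and model name are placeholders.

```python
import requests

endpoint = "https://{your-resource-name}.cognitiveservices.azure.com"
key = "{your-key}"
model_name = "{your-custom-model-name}"

with open("path_to_your_test_image.png", "rb") as f:
    image_data = f.read()

# Assumed preview route and parameters for analyzing with a custom model; check the REST reference for current values.
response = requests.post(
    f"{endpoint}/computervision/imageanalysis:analyze",
    params={"model-name": model_name, "api-version": "2023-02-01-preview"},
    headers={"Ocp-Apim-Subscription-Key": key, "Content-Type": "application/octet-stream"},
    data=image_data,
)
response.raise_for_status()
print(response.json())
```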
cognitive-services | How To Get Speech Session Id | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/how-to-get-speech-session-id.md | To get the Session ID when using the SDK, you need to:
1. Enable application logging.
1. Find the Session ID inside the log.
-If you use [Speech CLI](spx-overview.md), you can also get the Session ID interactively. See details [below](#get-session-id-using-speech-cli).
+If you use Speech SDK for JavaScript, get the Session ID as described in [this section](#get-session-id-using-javascript).
-In case of [Speech-to-text REST API for short audio](rest-speech-to-text-short.md) you need to "inject" the session information in the requests. See details [below](#provide-session-id-using-rest-api-for-short-audio).
+If you use [Speech CLI](spx-overview.md), you can also get the Session ID interactively. See details in [this section](#get-session-id-using-speech-cli).
+
+In case of [Speech-to-text REST API for short audio](rest-speech-to-text-short.md) you need to "inject" the session information in the requests. See details in [this section](#provide-session-id-using-rest-api-for-short-audio).
### Enable logging in the Speech SDK
Enable logging for your application as described in [this article](how-to-use-lo
### Get Session ID from the log
-Open the log file your application produced and look for `SessionId:`. The number, that would follow is the Session ID you need. In the log excerpt example below `0b734c41faf8430380d493127bd44631` is the Session ID.
+Open the log file your application produced and look for `SessionId:`. The number that would follow is the Session ID you need. In the following log excerpt example `0b734c41faf8430380d493127bd44631` is the Session ID.
```
[874193]: 218ms SPX_DBG_TRACE_VERBOSE: audio_stream_session.cpp:1238 [0000023981752A40]CSpxAudioStreamSession::FireSessionStartedEvent: Firing SessionStarted event: SessionId: 0b734c41faf8430380d493127bd44631
```
+### Get Session ID using JavaScript
+
+If you use Speech SDK for JavaScript, you get the Session ID with the help of the `sessionStarted` event from the [Recognizer class](/javascript/api/microsoft-cognitiveservices-speech-sdk/recognizer).
+
+See an example of getting the Session ID using JavaScript in [this sample](https://github.com/Azure-Samples/cognitive-services-speech-sdk/blob/master/samples/js/browser/index.html). Look for `recognizer.sessionStarted = onSessionStarted;` and then for `function onSessionStarted`.
### Get Session ID using Speech CLI
-If you use [Speech CLI](spx-overview.md), then you will see the Session ID in `SESSION STARTED` and `SESSION STOPPED` console messages.
+If you use [Speech CLI](spx-overview.md), then you'll see the Session ID in `SESSION STARTED` and `SESSION STOPPED` console messages.
-You can also enable logging for your sessions and get the Session ID from the log file as described above. Run the appropriate Speech CLI command to get the information on using logs:
+You can also enable logging for your sessions and get the Session ID from the log file as described in [this section](#get-session-id-from-the-log). Run the appropriate Speech CLI command to get the information on using logs:
```console
spx help recognize log |
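The article above covers the JavaScript SDK and the Speech CLI; if you happen to use the Python Speech SDK instead, the analogous pattern is the `session_started` event on the recognizer. A minimal sketch, with the key and region as placeholders:

```python
import azure.cognitiveservices.speech as speechsdk

speech_config = speechsdk.SpeechConfig(subscription="{your_speech_key}", region="{your_region}")
recognizer = speechsdk.SpeechRecognizer(speech_config=speech_config)  # uses the default microphone

# session_started carries the Session ID, analogous to sessionStarted in the JavaScript SDK.
recognizer.session_started.connect(lambda evt: print(f"SESSION STARTED: SessionId={evt.session_id}"))

result = recognizer.recognize_once()
print(result.text)
```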
cognitive-services | Create Translator Resource | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Translator/create-translator-resource.md | All Cognitive Services API requests require an endpoint URL and a read-only key 1. In the left rail, under *Resource Management*, select **Keys and Endpoint**. 1. Copy and paste your keys and endpoint URL in a convenient location, such as *Microsoft Notepad*. ## How to delete a resource or resource group -> [!Warning] +> [!WARNING] +> > Deleting a resource group also deletes all resources contained in the group. To remove a Cognitive Services or Translator resource, you can **delete the resource** or **delete the resource group**. |
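Once you've copied the key from the entry above, a quick way to confirm it works is a small text translation request against the global Translator endpoint (note this is separate from the Document Translation custom endpoint). A minimal sketch with a placeholder key and region:

```python
import requests

key = "{your-key}"
region = "{your-resource-region}"  # for example "westus2"; required for regional resources

response = requests.post(
    "https://api.cognitive.microsofttranslator.com/translate",
    params={"api-version": "3.0", "to": "fr"},
    headers={
        "Ocp-Apim-Subscription-Key": key,
        "Ocp-Apim-Subscription-Region": region,
        "Content-Type": "application/json",
    },
    json=[{"Text": "Hello, world!"}],
)
response.raise_for_status()
print(response.json())  # for example: [{'translations': [{'text': '...', 'to': 'fr'}]}]
```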
cognitive-services | Use Client Sdks | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Translator/document-translation/how-to-guides/use-client-sdks.md | -[Document Translation](../overview.md) is a cloud-based feature of the [Azure Translator](../../translator-overview.md) service. You can translate entire documents or process batch document translations in various file formats while preserving original document structure and format. In this article, you'll learn how to use the Document Translation service C#/.NET and Python client libraries. For the REST API, see our [Quickstart](../quickstarts/get-started-with-rest-api.md) guide. +[Document Translation](../overview.md) is a cloud-based feature of the [Azure Translator](../../translator-overview.md) service. You can translate entire documents or process batch document translations in various file formats while preserving original document structure and format. In this article, you learn how to use the Document Translation service C#/.NET and Python client libraries. For the REST API, see our [Quickstart](../quickstarts/get-started-with-rest-api.md) guide. ## Prerequisites -To get started, you'll need: +To get started, you need: * An active [**Azure account**](https://azure.microsoft.com/free/cognitive-services/). If you don't have one, you can [**create a free account**](https://azure.microsoft.com/free/). To get started, you'll need: * An [**Azure blob storage account**](https://portal.azure.com/#create/Microsoft.StorageAccount-ARM). You'll [**create containers**](../../../../storage/blobs/storage-quickstart-blobs-portal.md#create-a-container) in your Azure blob storage account for your source and target files: * **Source container**. This container is where you upload your files for translation (required).- * **Target container**. This container is where your translated files will be stored (required). + * **Target container**. This container is where your translated files are stored (required). * You also need to create Shared Access Signature (SAS) tokens for your source and target containers. The `sourceUrl` and `targetUrl` , must include a Shared Access Signature (SAS) token, appended as a query string. The token can be assigned to your container or specific blobs. *See* [**Create SAS tokens for Document Translation process**](create-sas-tokens.md). using System; using System.Threading; ``` -In the application's **Program** class, create variables for your key and custom endpoint. For details, *see* [Custom domain name and key](../quickstarts/get-started-with-rest-api.md#your-custom-domain-name-and-key) +In the application's **Program** class, create variables for your key and custom endpoint. For more information, *see* [Retrieve your key and custom domain endpoint](../quickstarts/get-started-with-rest-api.md#retrieve-your-key-and-document-translation-endpoint). ```csharp private static readonly string endpoint = "<your custom endpoint>"; private static readonly string key = "<your key>"; ### Translate a document or batch files -* To Start a translation operation for one or more documents in a single blob container, you'll call the `StartTranslationAsync` method. +* To Start a translation operation for one or more documents in a single blob container, you call the `StartTranslationAsync` method. * To call `StartTranslationAsync`, you need to initialize a `DocumentTranslationInput` object that contains the following parameters: * **sourceUri**. 
The SAS URI for the source container containing documents to be translated.-* **targetUri** The SAS URI for the target container to which the translated documents will be written. +* **targetUri** The SAS URI for the target container to which the translated documents are written. * **targetLanguageCode**. The language code for the translated documents. You can find language codes on our [Language support](../../language-support.md) page. ```csharp from azure.core.credentials import AzureKeyCredential from azure.ai.translation.document import DocumentTranslationClient ``` -Create variables for your resource key, custom endpoint, sourceUrl, and targetUrl. For -more information, *see* [Custom domain name and key](../quickstarts/get-started-with-rest-api.md#your-custom-domain-name-and-key) +Create variables for your resource key, custom endpoint, sourceUrl, and targetUrl. For more information, *see* [Retrieve your key and custom domain endpoint](../quickstarts/get-started-with-rest-api.md#retrieve-your-key-and-document-translation-endpoint). ```python key = "<your-key>" |
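Continuing the Python setup in the entry above, the following is a minimal end-to-end sketch of starting a batch translation and reading the results with the `azure-ai-translation-document` package. The endpoint, key, and SAS container URLs are placeholders, and the result attribute names are assumptions based on that package's document status objects:

```python
from azure.core.credentials import AzureKeyCredential
from azure.ai.translation.document import DocumentTranslationClient

endpoint = "<your custom document translation endpoint>"
key = "<your-key>"
source_url = "<SAS URL of your source container>"
target_url = "<SAS URL of your target container>"

client = DocumentTranslationClient(endpoint, AzureKeyCredential(key))

# Kick off the batch translation and wait for it to finish.
poller = client.begin_translation(source_url, target_url, "fr")
result = poller.result()

for document in result:
    print(f"{document.id}: {document.status}")
    if document.status == "Succeeded":
        print(f"  translated file: {document.translated_document_url}")
    elif document.error:
        print(f"  error: {document.error.message}")
```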
cognitive-services | Use Rest Api Programmatically | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Translator/document-translation/how-to-guides/use-rest-api-programmatically.md | To get started, you need:
* An active [**Azure account**](https://azure.microsoft.com/free/cognitive-services/). If you don't have one, you can [**create a free account**](https://azure.microsoft.com/free/).
-* An [**Azure blob storage account**](https://portal.azure.com/#create/Microsoft.StorageAccount-ARM). You create containers to store and organize your blob data within your storage account.
+* An [**Azure blob storage account**](https://portal.azure.com/#create/Microsoft.StorageAccount-ARM). You also need to [create containers](#create-azure-blob-storage-containers) in your Azure blob storage account for your source and target files:
+
+  * **Source container**. This container is where you upload your files for translation (required).
+  * **Target container**. This container is where your translated files are stored (required).
* A [**single-service Translator resource**](https://portal.azure.com/#create/Microsoft.CognitiveServicesTextTranslation) (**not** a multi-service Cognitive Services resource):
To get started, you need:
1. After your resource has successfully deployed, select **Go to resource**.
-## Your custom domain name and key
-
-> [!IMPORTANT]
->
-> * **All API requests to the Document Translation service require a custom domain endpoint**.
-> * You won't use the endpoint found on your Azure portal resource _Keys and Endpoint_ page nor the global translator endpointΓÇö`api.cognitive.microsofttranslator.com`ΓÇöto make HTTP requests to Document Translation.
+### Retrieve your key and custom domain endpoint
-### What is the custom domain endpoint?
+Requests to the Translator service require a read-only key and custom endpoint to authenticate access. The custom domain endpoint is a URL formatted with your resource name, hostname, and Translator subdirectories and is available in the Azure portal.
-The custom domain endpoint is a URL formatted with your resource name, hostname, and Translator subdirectories:
+1. If you've created a new resource, after it deploys, select **Go to resource**. If you have an existing Document Translation resource, navigate directly to your resource page.
-```http
-https://<NAME-OF-YOUR-RESOURCE>.cognitiveservices.azure.com/translator/text/batch/v1.0
-```
+1. In the left rail, under *Resource Management*, select **Keys and Endpoint**.
-### Find your custom domain name
+1. Copy and paste your **`key`** and **`document translation endpoint`** in a convenient location, such as *Microsoft Notepad*. Only one key is necessary to make an API call.
-The **NAME-OF-YOUR-RESOURCE** (also called *custom domain name*) parameter is the value that you entered in the **Name** field when you created your Translator resource.
+1. You paste your **`key`** and **`document translation endpoint`** into the code samples to authenticate your request to the Document Translation service.
+    :::image type="content" source="../media/document-translation-key-endpoint.png" alt-text="Screenshot showing the get your key field in Azure portal.":::
### Get your key |
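As a rough illustration of how the key and custom domain endpoint from the entry above come together in a request, the sketch below submits a batch translation job with `requests`. The `/batches` route and payload shape follow the v1.0 batch API that this article describes, but treat them as assumptions to verify against the REST reference; the resource name, key, and SAS URLs are placeholders.

```python
import requests

endpoint = "https://{your-resource-name}.cognitiveservices.azure.com/translator/text/batch/v1.0"
key = "{your-key}"

body = {
    "inputs": [
        {
            "source": {"sourceUrl": "https://{your-storage}.blob.core.windows.net/source?{sas-token}"},
            "targets": [
                {"targetUrl": "https://{your-storage}.blob.core.windows.net/target?{sas-token}", "language": "fr"}
            ],
        }
    ]
}

response = requests.post(
    f"{endpoint}/batches",
    headers={"Ocp-Apim-Subscription-Key": key, "Content-Type": "application/json"},
    json=body,
)
response.raise_for_status()
# The job status URL is returned in the Operation-Location header.
print(response.status_code, response.headers.get("Operation-Location"))
```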
cognitive-services | Get Started With Rest Api | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Translator/document-translation/quickstarts/get-started-with-rest-api.md | To get started, you need: * An active [**Azure account**](https://azure.microsoft.com/free/cognitive-services/). If you don't have one, you can [**create a free account**](https://azure.microsoft.com/free/). -* An [**Azure blob storage account**](https://portal.azure.com/#create/Microsoft.StorageAccount-ARM). You also need to create containers to store and organize your blob data within your storage account. +* An [**Azure blob storage account**](https://portal.azure.com/#create/Microsoft.StorageAccount-ARM). You also need to [create containers](#create-azure-blob-storage-containers) in your Azure blob storage account for your source and target files: ++ * **Source container**. This container is where you upload your files for translation (required). + * **Target container**. This container is where your translated files are stored (required). * A [**single-service Translator resource**](https://portal.azure.com/#create/Microsoft.CognitiveServicesTextTranslation) (**not** a multi-service Cognitive Services resource): To get started, you need: 1. After your resource has successfully deployed, select **Go to resource**. -## Your custom domain name and key --The custom domain endpoint is a URL formatted with your resource name, hostname, and Translator subdirectories and is available in the Azure portal. --> [!IMPORTANT] -> -> * **All API requests to the Document Translation service require a custom domain endpoint**. -> * Don't use the Text Translation endpoint found on your Azure portal resource *Keys and Endpoint* page nor the global translator endpointΓÇö`api.cognitive.microsofttranslator.com`ΓÇöto make HTTP requests to Document Translation. - > [!div class="nextstepaction"] > [I ran into an issue with the prerequisites.](https://microsoft.qualtrics.com/jfe/form/SV_0Cl5zkG3CnDjq6O?Pillar=Language&Product=Document-translation&Page=quickstart&Section=Prerequisites) -### Retrieve your key and endpoint +### Retrieve your key and document translation endpoint -Requests to the Translator service require a read-only key and custom endpoint to authenticate access. +*Requests to the Translator service require a read-only key and custom endpoint to authenticate access. The custom domain endpoint is a URL formatted with your resource name, hostname, and Translator subdirectories and is available in the Azure portal. 1. If you've created a new resource, after it deploys, select **Go to resource**. If you have an existing Document Translation resource, navigate directly to your resource page. Requests to the Translator service require a read-only key and custom endpoint t 1. Copy and paste your **`key`** and **`document translation endpoint`** in a convenient location, such as *Microsoft Notepad*. Only one key is necessary to make an API call. -1. You **`key`** and **`document translation endpoint`** into the code samples to authenticate your request to the Document Translation service. +1. You paste your **`key`** and **`document translation endpoint`** into the code samples to authenticate your request to the Document Translation service. :::image type="content" source="../media/document-translation-key-endpoint.png" alt-text="Screenshot showing the get your key field in Azure portal."::: |
cognitive-services | Tag Data | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/language-service/custom-named-entity-recognition/how-to/tag-data.md | As you label your data, keep in mind: * **Label precisely**: Label each entity to its right type always. Only include what you want extracted, avoid unnecessary data in your labels. * **Label consistently**: The same entity should have the same label across all the documents.- * **Label completely**: Label all the instances of the entity in all your documents. You can use the [auto-labeling feature](use-autotagging.md) to ensure complete labeling. + * **Label completely**: Label all the instances of the entity in all your documents. You can use the [auto labelling feature](use-autolabeling.md) to ensure complete labeling. > [!NOTE] > There is no fixed number of labels that can guarantee your model will perform the best. Model performance is dependent on possible ambiguity in your [schema](design-schema.md), and the quality of your labeled data. Nevertheless, we recommend having around 50 labeled instances per entity type. |
cognitive-services | Use Autolabeling | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/language-service/custom-named-entity-recognition/how-to/use-autolabeling.md | + + Title: How to use autolabeling in custom named entity recognition ++description: Learn how to use autolabeling in custom named entity recognition. +++++++ Last updated : 03/20/2023++++# How to use autolabeling for Custom Named Entity Recognition ++[Labeling process](tag-data.md) is an important part of preparing your dataset. Since this process requires both time and effort, you can use the autolabeling feature to automatically label your entities. You can start autolabeling jobs based on a model you've previously trained or using GPT models. With autolabeling based on a model you've previously trained, you can start labeling a few of your documents, train a model, then create an autolabeling job to produce entity labels for other documents based on that model. With autolabeling with GPT, you may immediately trigger an autolabeling job without any prior model training. This feature can save you the time and effort of manually labeling your entities. ++## Prerequisites ++### [Autolabel based on a model you've trained](#tab/autolabel-model) ++Before you can use autolabeling based on a model you've trained, you need: +* A successfully [created project](create-project.md) with a configured Azure blob storage account. +* Text data that [has been uploaded](design-schema.md#data-preparation) to your storage account. +* [Labeled data](tag-data.md) +* A [successfully trained model](train-model.md) +++### [Autolabel with GPT](#tab/autolabel-gpt) +Before you can use autolabeling with GPT, you need: +* A successfully [created project](create-project.md) with a configured Azure blob storage account. +* Text data that [has been uploaded](design-schema.md#data-preparation) to your storage account. +* Entity names that are meaningful. The GPT models label entities in your documents based on the name of the entity you've provided. +* [Labeled data](tag-data.md) isn't required. +* An Azure OpenAI [resource and deployment](../../../openai/how-to/create-resource.md). ++++## Trigger an autolabeling job ++### [Autolabel based on a model you've trained](#tab/autolabel-model) ++When you trigger an autolabeling job based on a model you've trained, there's a monthly limit of 5,000 text records per month, per resource. This means the same limit applies on all projects within the same resource. ++> [!TIP] +> A text record is calculated as the ceiling of (Number of characters in a document / 1,000). For example, if a document has 8921 characters, the number of text records is: +> +> `ceil(8921/1000) = ceil(8.921)`, which is 9 text records. ++1. From the left navigation menu, select **Data labeling**. +2. Select the **Autolabel** button under the Activity pane to the right of the page. +++ :::image type="content" source="../media/trigger-autotag.png" alt-text="A screenshot showing how to trigger an autotag job." lightbox="../media/trigger-autotag.png"::: + +3. Choose Autolabel based on a model you've trained and click on Next. ++ :::image type="content" source="../media/choose-models.png" alt-text="A screenshot showing model choice for auto labeling." lightbox="../media/choose-models.png"::: + +4. Choose a trained model. It's recommended to check the model performance before using it for autolabeling. 
++ :::image type="content" source="../media/choose-model-trained.png" alt-text="A screenshot showing how to choose trained model for autotagging." lightbox="../media/choose-model-trained.png"::: ++5. Choose the entities you want to be included in the autolabeling job. By default, all entities are selected. You can see the total labels, precision and recall of each entity. It's recommended to include entities that perform well to ensure the quality of the automatically labeled entities. ++ :::image type="content" source="../media/choose-entities.png" alt-text="A screenshot showing which entities to be included in autotag job." lightbox="../media/choose-entities.png"::: ++6. Choose the documents you want to be automatically labeled. The number of text records of each document is displayed. When you select one or more documents, you should see the number of texts records selected. It's recommended to choose the unlabeled documents from the filter. ++ > [!NOTE] + > * If an entity was automatically labeled, but has a user defined label, only the user defined label is used and visible. + > * You can view the documents by clicking on the document name. + + :::image type="content" source="../media/choose-files.png" alt-text="A screenshot showing which documents to be included in the autotag job." lightbox="../media/choose-files.png"::: ++7. Select **Autolabel** to trigger the autolabeling job. +You should see the model used, number of documents included in the autolabeling job, number of text records and entities to be automatically labeled. Autolabeling jobs can take anywhere from a few seconds to a few minutes, depending on the number of documents you included. ++ :::image type="content" source="../media/review-autotag.png" alt-text="A screenshot showing the review screen for an autotag job." lightbox="../media/review-autotag.png"::: ++### [Autolabel with GPT](#tab/autolabel-gpt) ++When you trigger an autolabeling job with GPT, you're charged to your Azure OpenAI resource as per your consumption. You're charged an estimate of the number of tokens in each document being autolabeled. Refer to the [Azure OpenAI pricing page](https://azure.microsoft.com/pricing/details/cognitive-services/openai-service/) for a detailed breakdown of pricing per token of different models. ++1. From the left navigation menu, select **Data labeling**. +2. Select the **Autolabel** button under the Activity pane to the right of the page. ++ :::image type="content" source="../media/trigger-autotag.png" alt-text="A screenshot showing how to trigger an autotag job from the activity pane." lightbox="../media/trigger-autotag.png"::: ++4. Choose Autolabel with GPT and click on Next. ++ :::image type="content" source="../media/choose-models.png" alt-text="A screenshot showing model choice for auto labeling." lightbox="../media/choose-models.png"::: ++5. Choose your Azure OpenAI resource and deployment. You must [create an Azure OpenAI resource and deploy a model](../../../openai/how-to/create-resource.md) in order to proceed. ++ :::image type="content" source="../media/autotag-choose-open-ai.png" alt-text="A screenshot showing how to choose OpenAI resource and deployments" lightbox="../media/autotag-choose-open-ai.png"::: + +6. Choose the entities you want to be included in the autolabeling job. By default, all entities are selected. Having descriptive names for labels, and including examples for each label is recommended to achieve good quality labeling with GPT. 
++ :::image type="content" source="../media/choose-entities.png" alt-text="A screenshot showing which entities to be included in autotag job." lightbox="../media/choose-entities.png"::: + +7. Choose the documents you want to be automatically labeled. It's recommended to choose the unlabeled documents from the filter. ++ > [!NOTE] + > * If an entity was automatically labeled, but has a user defined label, only the user defined label is used and visible. + > * You can view the documents by clicking on the document name. + + :::image type="content" source="../media/choose-files.png" alt-text="A screenshot showing which documents to be included in the autotag job." lightbox="../media/choose-files.png"::: ++8. Select **Start job** to trigger the autolabeling job. +You should be directed to the autolabeling page displaying the autolabeling jobs initiated. Autolabeling jobs can take anywhere from a few seconds to a few minutes, depending on the number of documents you included. ++ :::image type="content" source="../media/review-autotag.png" alt-text="A screenshot showing the review screen for an autotag job." lightbox="../media/review-autotag.png"::: +++++## Review the auto labeled documents ++When the autolabeling job is complete, you can see the output documents in the **Data labeling** page of Language Studio. Select **Review documents with autolabels** to view the documents with the **Auto labeled** filter applied. +++Entities that have been automatically labeled appear with a dotted line. These entities have two selectors (a checkmark and an "X") that allow you to accept or reject the automatic label. ++Once an entity is accepted, the dotted line changes to a solid one, and the label is included in any further model training becoming a user defined label. ++Alternatively, you can accept or reject all automatically labeled entities within the document, using **Accept all** or **Reject all** in the top right corner of the screen. ++After you accept or reject the labeled entities, select **Save labels** to apply the changes. ++> [!NOTE] +> * We recommend validating automatically labeled entities before accepting them. +> * All labels that were not accepted are be deleted when you train your model. +++## Next steps ++* Learn more about [labeling your data](tag-data.md). |
cognitive-services | Tag Data | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/language-service/custom-text-classification/how-to/tag-data.md | Use the following steps to label your data: + You can also use the [auto labeling feature](use-autolabeling.md) to ensure complete labeling. + 6. In the right side pane under the **Labels** pivot you can find all the classes in your project and the count of labeled instances per each. 7. In the bottom section of the right side pane you can add the current file you are viewing to the training set or the testing set. By default all the documents are added to your training set. Learn more about [training and testing sets](train-model.md#data-splitting) and how they are used for model training and evaluation. |
cognitive-services | Use Autolabeling | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/language-service/custom-text-classification/how-to/use-autolabeling.md | + + Title: How to use autolabeling in custom text classification ++description: Learn how to use autolabeling in custom text classification. +++++++ Last updated : 3/15/2023++++# How to use autolabeling for Custom Text Classification ++[Labeling process](tag-data.md) is an important part of preparing your dataset. Since this process requires much time and effort, you can use the autolabeling feature to automatically label your documents with the classes you want to categorize them into. You can currently start autolabeling jobs based on a model using GPT models where you may immediately trigger an autolabeling job without any prior model training. This feature can save you the time and effort of manually labeling your documents. ++## Prerequisites ++Before you can use autolabeling with GPT, you need: +* A successfully [created project](create-project.md) with a configured Azure blob storage account. +* Text data that [has been uploaded](design-schema.md#data-preparation) to your storage account. +* Class names that are meaningful. The GPT models label documents based on the names of the classes you've provided. +* [Labeled data](tag-data.md) isn't required. +* An Azure OpenAI [resource and deployment](../../../openai/how-to/create-resource.md). ++++## Trigger an autolabeling job ++When you trigger an autolabeling job with GPT, you're charged to your Azure OpenAI resource as per your consumption. You're charged an estimate of the number of tokens in each document being autolabeled. Refer to the [Azure OpenAI pricing page](https://azure.microsoft.com/pricing/details/cognitive-services/openai-service/) for a detailed breakdown of pricing per token of different models. ++1. From the left navigation menu, select **Data labeling**. +2. Select the **Autolabel** button under the Activity pane to the right of the page. ++ :::image type="content" source="../media/trigger-autotag.png" alt-text="A screenshot showing how to trigger an autotag job from the activity pane." lightbox="../media/trigger-autotag.png"::: ++4. Choose Autolabel with GPT and click on Next. ++ :::image type="content" source="../media/choose-models.png" alt-text="A screenshot showing model choice for auto labeling." lightbox="../media/choose-models.png"::: ++5. Choose your Azure OpenAI resource and deployment. You must [create an Azure OpenAI resource and deploy a model](../../../openai/how-to/create-resource.md) in order to proceed. ++ :::image type="content" source="../media/autotag-choose-open-ai.png" alt-text="A screenshot showing how to choose OpenAI resource and deployments" lightbox="../media/autotag-choose-open-ai.png"::: + +6. Select the classes you want to be included in the autolabeling job. By default, all classes are selected. Having descriptive names for classes, and including examples for each class is recommended to achieve good quality labeling with GPT. ++ :::image type="content" source="../media/choose-classes.png" alt-text="A screenshot showing which labels to be included in autotag job." lightbox="../media/choose-classes.png"::: + +7. Choose the documents you want to be automatically labeled. It's recommended to choose the unlabeled documents from the filter. ++ > [!NOTE] + > * If a document was automatically labeled, but this label was already user defined, only the user defined label is used. 
+ > * You can view the documents by clicking on the document name. + + :::image type="content" source="../media/choose-files.png" alt-text="A screenshot showing which documents to be included in the autotag job." lightbox="../media/choose-files.png"::: ++8. Select **Start job** to trigger the autolabeling job. +You should be directed to the autolabeling page displaying the autolabeling jobs initiated. Autolabeling jobs can take anywhere from a few seconds to a few minutes, depending on the number of documents you included. ++ :::image type="content" source="../media/review-autotag.png" alt-text="A screenshot showing the review screen for an autotag job." lightbox="../media/review-autotag.png"::: +++++## Review the auto labeled documents ++When the autolabeling job is complete, you can see the output documents in the **Data labeling** page of Language Studio. Select **Review documents with autolabels** to view the documents with the **Auto labeled** filter applied. +++Documents that have been automatically classified have suggested labels in the activity pane highlighted in purple. Each suggested label has two selectors (a checkmark and a cancel icon) that allow you to accept or reject the automatic label. ++Once a label is accepted, the purple color changes to the default blue one, and the label is included in any further model training becoming a user defined label. ++After you accept or reject the labels for the autolabeled documents, select **Save labels** to apply the changes. ++> [!NOTE] +> * We recommend validating automatically labeled documents before accepting them. +> * All labels that were not accepted are deleted when you train your model. +++## Next steps ++* Learn more about [labeling your data](tag-data.md). |
cognitive-services | Models | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/openai/concepts/models.md | When using our embeddings models, keep in mind their limitations and risks. ### Embeddings Models | Model ID | Supports Completions | Supports Embeddings | Base model Regions | Fine-Tuning Regions | Max Request (tokens) | Training Data (up to) | | | | | | | | |-| text-embedding-ada-002 | No | Yes | East US, South Central US, West Europe | N/A |8,192 | Sep 2021 | +| text-embedding-ada-002 | No | Yes | East US, South Central US, West Europe | N/A |2,046 | Sep 2021 | | text-similarity-ada-001 | No | Yes | East US, South Central US, West Europe | N/A | 2,046 | Aug 2020 | | text-similarity-babbage-001 | No | Yes | South Central US, West Europe | N/A | 2,046 | Aug 2020 | | text-similarity-curie-001 | No | Yes | East US, South Central US, West Europe | N/A | 2046 | Aug 2020 | |
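For context on how the "Max Request (tokens)" limit in the table above applies, the following is a minimal sketch of calling an embeddings deployment from Python with the `openai` package configured for Azure. The deployment name, endpoint, and API version are placeholders or assumptions for your own resource:

```python
import openai

# Assumptions: substitute your own Azure OpenAI endpoint, key, and deployment name.
openai.api_type = "azure"
openai.api_base = "https://{your-resource-name}.openai.azure.com/"
openai.api_version = "2022-12-01"
openai.api_key = "{your-key}"

# The input text counts against the model's max request token limit.
response = openai.Embedding.create(
    input="The food was delicious and the waiter was very friendly.",
    engine="{your-text-embedding-ada-002-deployment-name}",
)

embedding = response["data"][0]["embedding"]
print(len(embedding))  # dimensionality of the returned vector
```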
cognitive-services | Overview | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/openai/overview.md | At Microsoft, we're committed to the advancement of AI driven by principles that How do I get access to Azure OpenAI? -Access is currently limited as we navigate high demand, upcoming product improvements, and <a href="https://www.microsoft.com/ai/responsible-ai?activetab=pivot1:primaryr6" target="_blank">MicrosoftΓÇÖs commitment to responsible AI</a>. For now, we're working with customers with an existing partnership with Microsoft, lower risk use cases, and those committed to incorporating mitigations. In addition to applying for initial access, all solutions using Azure OpenAI are required to go through a use case review before they can be released for production use. +Access is currently limited as we navigate high demand, upcoming product improvements, and <a href="https://www.microsoft.com/ai/responsible-ai?activetab=pivot1:primaryr6" target="_blank">MicrosoftΓÇÖs commitment to responsible AI</a>. For now, we're working with customers with an existing partnership with Microsoft, lower risk use cases, and those committed to incorporating mitigations. More specific information is included in the application form. We appreciate your patience as we work to responsibly enable broader access to Azure OpenAI. -Apply here for initial access or for a production review: +Apply here for access: <a href="https://aka.ms/oaiapply" target="_blank">Apply now</a> -All solutions using Azure OpenAI are also required to go through a use case review before they can be released for production use, and are evaluated on a case-by-case basis. In general, the more sensitive the scenario the more important risk mitigation measures will be for approval. - ## Comparing Azure OpenAI and OpenAI Azure OpenAI Service gives customers advanced language AI with OpenAI GPT-4, GPT-3, Codex, and DALL-E models with the security and enterprise promise of Azure. Azure OpenAI co-develops the APIs with OpenAI, ensuring compatibility and a smooth transition from one to the other. |
cognitive-services | Quickstart | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/openai/quickstart.md | -zone_pivot_groups: openai-quickstart ++ Last updated : 03/15/2023+zone_pivot_groups: openai-quickstart-new recommendations: false Use this article to get started making your first calls to Azure OpenAI. ::: zone-end +++ ::: zone pivot="programming-language-python" [!INCLUDE [Python SDK quickstart](includes/python.md)] |
communication-services | Ui Library Cross Platform | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/samples/ui-library-cross-platform.md | Title: Cross Platform development using the UI library -description: Cross Platform development solutions using the UI library to enable .NET MAUI, Xamarin and React Native developers build communication applications +description: Cross Platform development solutions using the UI library to enable .NET MAUI, Xamarin and React Native developers build communication calling mobile applications |
container-registry | Container Registry Repository Scoped Permissions | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-registry/container-registry-repository-scoped-permissions.md | Title: Permissions to repositories in Azure Container Registry -description: Create a token with permissions scoped to specific repositories in a Premium registry to pull or push images, or perform other actions +description: Create a token with permissions scoped to specific repositories in a registry to pull or push images, or perform other actions ms.devlang: azurecli # Create a token with repository-scoped permissions -This article describes how to create tokens and scope maps to manage access to specific repositories in your container registry. By creating tokens, a registry owner can provide users or services with scoped, time-limited access to repositories to pull or push images or perform other actions. A token provides more fine-grained permissions than other registry [authentication options](container-registry-authentication.md), which scope permissions to an entire registry. +This article describes how to create tokens and scope maps to manage access to specific repositories in your container registry. By creating tokens, a registry owner can provide users or services with scoped, time-limited access to repositories to pull or push images or perform other actions. A token provides more fine-grained permissions than other registry [authentication options](container-registry-authentication.md), which scope permissions to an entire registry. Scenarios for creating a token include: Scenarios for creating a token include: * Provide an external organization with permissions to a specific repository * Limit repository access to different user groups in your organization. For example, provide write and read access to developers who build images that target specific repositories, and read access to teams that deploy from those repositories. -This feature is available in the **Premium** container registry service tier. For information about registry service tiers and limits, see [Azure Container Registry service tiers](container-registry-skus.md). +This feature is available in all the service tiers. For information about registry service tiers and limits, see [Azure Container Registry service tiers](container-registry-skus.md) ## Limitations * You can't currently assign repository-scoped permissions to an Azure Active Directory identity, such as a service principal or managed identity. - ## Concepts -To configure repository-scoped permissions, you create a *token* with an associated *scope map*. +To configure repository-scoped permissions, you create a *token* with an associated *scope map*. * A **token** along with a generated password lets the user authenticate with the registry. You can set an expiration date for a token password, or disable a token at any time. To configure repository-scoped permissions, you create a *token* with an associa |`metadata/read` | Read metadata from the repository | List tags or manifests | |`metadata/write` | Write metadata to the repository | Enable or disable read, write, or delete operations | -* A **scope map** groups the repository permissions you apply to a token, and can reapply to other tokens. Every token is associated with a single scope map. +* A **scope map** groups the repository permissions you apply to a token, and can reapply to other tokens. Every token is associated with a single scope map. 
With a scope map: - * Configure multiple tokens with identical permissions to a set of repositories - * Update token permissions when you add or remove repository actions in the scope map, or apply a different scope map + * Configure multiple tokens with identical permissions to a set of repositories + * Update token permissions when you add or remove repository actions in the scope map, or apply a different scope map Azure Container Registry also provides several system-defined scope maps you can apply when creating tokens. The permissions of system-defined scope maps apply to all repositories in your registry.The individual *actions* corresponds to the limit of [Repositories per scope map.](container-registry-skus.md) -The following image shows the relationship between tokens and scope maps. +The following image shows the relationship between tokens and scope maps.  The following image shows the relationship between tokens and scope maps. * **Azure CLI** - Azure CLI command examples in this article require Azure CLI version 2.17.0 or later. Run `az --version` to find the version. If you need to install or upgrade, see [Install Azure CLI](/cli/azure/install-azure-cli). * **Docker** - To authenticate with the registry to pull or push images, you need a local Docker installation. Docker provides installation instructions for [macOS](https://docs.docker.com/docker-for-mac/), [Windows](https://docs.docker.com/docker-for-windows/), and [Linux](https://docs.docker.com/engine/installation/#supported-platforms) systems.-* **Container registry** - If you don't have one, create a Premium container registry in your Azure subscription, or upgrade an existing registry. For example, use the [Azure portal](container-registry-get-started-portal.md) or the [Azure CLI](container-registry-get-started-azure-cli.md). +* **Container registry** - If you don't have one, create a container registry in your Azure subscription. For example, use the [Azure portal](container-registry-get-started-portal.md) or the [Azure CLI](container-registry-get-started-azure-cli.md). ## Create token - CLI After the token is validated and created, token details appear in the **Tokens** ### Add token password -To use a token created in the portal, you must generate a password. You can generate one or two passwords, and set an expiration date for each one. New passwords created for tokens are available immediately. Regenerating new passwords for tokens will take 60 seconds to replicate and be available. +To use a token created in the portal, you must generate a password. You can generate one or two passwords, and set an expiration date for each one. New passwords created for tokens are available immediately. Regenerating new passwords for tokens will take 60 seconds to replicate and be available. 1. In the portal, navigate to your container registry. 1. Under **Repository permissions**, select **Tokens**, and select a token. In the portal, on the **Tokens** screen, select the token, and under **Scope map ## Disable or delete token -You might need to temporarily disable use of the token credentials for a user or service. +You might need to temporarily disable use of the token credentials for a user or service. Using the Azure CLI, run the [az acr token update][az-acr-token-update] command to set the `status` to `disabled`: az acr token update --name MyToken --registry myregistry \ In the portal, select the token in the **Tokens** screen, and select **Disabled** under **Status**. 
-To delete a token to permanently invalidate access by anyone using its credentials, run the [az acr token delete][az-acr-token-delete] command. +To delete a token to permanently invalidate access by anyone using its credentials, run the [az acr token delete][az-acr-token-delete] command. ```azurecli az acr token delete --name MyToken --registry myregistry |
cosmos-db | Manage Data Java V4 Sdk | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/cassandra/manage-data-java-v4-sdk.md | In this quickstart, you create an Azure Cosmos DB for Apache Cassandra account, ## Prerequisites - An Azure account with an active subscription. [Create one for free](https://azure.microsoft.com/free/?ref=microsoft.com&utm_source=microsoft.com&utm_medium=docs&utm_campaign=visualstudio). Or [try Azure Cosmos DB for free](../try-free.md) without an Azure subscription.-- [Java Development Kit (JDK) 8](https://www.azul.com/downloads/azure-only/zulu/?&version=java-8-lts&architecture=x86-64-bit&package=jdk). Point your `JAVA_HOME` environment variable to the folder where the JDK is installed.+- [Java Development Kit (JDK) 8](/java/openjdk/download#openjdk-8). Point your `JAVA_HOME` environment variable to the folder where the JDK is installed. - A [Maven binary archive](https://maven.apache.org/download.cgi). On Ubuntu, run `apt-get install maven` to install Maven. - [Git](https://www.git-scm.com/downloads). On Ubuntu, run `sudo apt-get install git` to install Git. |
cosmos-db | Manage Data Java | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/cassandra/manage-data-java.md | In this quickstart, you create an Azure Cosmos DB for Apache Cassandra account, ## Prerequisites - An Azure account with an active subscription. [Create one for free](https://azure.microsoft.com/free/?ref=microsoft.com&utm_source=microsoft.com&utm_medium=docs&utm_campaign=visualstudio). Or [try Azure Cosmos DB for free](../try-free.md) without an Azure subscription.-- [Java Development Kit (JDK) 8](https://www.azul.com/downloads/azure-only/zulu/?&version=java-8-lts&architecture=x86-64-bit&package=jdk). Point your `JAVA_HOME` environment variable to the folder where the JDK is installed.+- [Java Development Kit (JDK) 8](/java/openjdk/download#openjdk-8). Point your `JAVA_HOME` environment variable to the folder where the JDK is installed. - A [Maven binary archive](https://maven.apache.org/download.cgi). On Ubuntu, run `apt-get install maven` to install Maven. - [Git](https://www.git-scm.com/downloads). On Ubuntu, run `sudo apt-get install git` to install Git. |
cosmos-db | Quickstart Java | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/gremlin/quickstart-java.md | In this quickstart, you create and manage an Azure Cosmos DB for Gremlin (graph) ## Prerequisites - An Azure account with an active subscription. [Create one for free](https://azure.microsoft.com/free/?ref=microsoft.com&utm_source=microsoft.com&utm_medium=docs&utm_campaign=visualstudio). -- [Java Development Kit (JDK) 8](https://www.azul.com/downloads/azure-only/zulu/?&version=java-8-lts&architecture=x86-64-bit&package=jdk). Point your `JAVA_HOME` environment variable to the folder where the JDK is installed.+- [Java Development Kit (JDK) 8](/java/openjdk/download#openjdk-8). Point your `JAVA_HOME` environment variable to the folder where the JDK is installed. - A [Maven binary archive](https://maven.apache.org/download.cgi). - [Git](https://www.git-scm.com/downloads). - [Gremlin-driver 3.4.13](https://mvnrepository.com/artifact/org.apache.tinkerpop/gremlin-driver/3.4.13), this dependency is mentioned in the quickstart sample's pom.xml |
cosmos-db | Merge | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/merge.md | Merging partitions in Azure Cosmos DB (preview) allows you to reduce the number To get started using partition merge, navigate to the **Features** page in your Azure Cosmos DB account. Select and enable the **Partition merge (preview)** feature. -Before enabling the feature, verify that your Azure Cosmos DB account(s) meet all the [preview eligibility criteria](#preview-eligibility-criteria). Once you've enabled the feature, it will take 15-20 minutes to take effect. +Before enabling the feature, verify that your Azure Cosmos DB account(s) meet all the [preview eligibility criteria](#preview-eligibility-criteria). Once you've enabled the feature, it takes 15-20 minutes to take effect. > [!CAUTION]-> When merge is enabled on an account, only requests from .NET SDK version >= 3.27.0 will be allowed on the account, regardless of whether merges are ongoing or not. Requests from other SDKs (older .NET SDK, Java, JavaScript, Python, Go) or unsupported connectors (Azure Data Factory, Azure Search, Azure Cosmos DB Spark connector, Azure Functions, Azure Stream Analytics, and others) will be blocked and fail. Ensure you have upgraded to a supported SDK version before enabling the feature. After the feature is enabled or disabled, it may take 15-20 minutes to fully propagate to the account. If you plan to disable the feature after you've completed using it, it may take 15-20 minutes before requests from SDKs and connectors that are not supported for merge are allowed. +> When merge is enabled on an account, only requests from .NET SDK version >= 3.27.0 or Java SDK >= 4.42.0 will be allowed on the account, regardless of whether merges are ongoing or not. Requests from other SDKs (older .NET SDK, older Java SDK, any JavaScript SDK, any Python SDK, any Go SDK) or unsupported connectors (Azure Data Factory, Azure Search, Azure Cosmos DB Spark connector, Azure Functions, Azure Stream Analytics, and others) will be blocked and fail. Ensure you have upgraded to a supported SDK version before enabling the feature. After the feature is enabled or disabled, it may take 15-20 minutes to fully propagate to the account. If you plan to disable the feature after you've completed using it, it may take 15-20 minutes before requests from SDKs and connectors that are not supported for merge are allowed. :::image type="content" source="media/merge/merge-feature-blade.png" alt-text="Screenshot of Features pane and Partition merge feature."::: Condition 2 often occurs when you delete/TTL a large volume of data, leaving unu To determine the current RU/s per physical partition, from your Cosmos account, navigate to **Metrics**. Select the metric **Physical Partition Throughput** and filter to your database and container. Apply splitting by **PhysicalPartitionId**. -For containers using autoscale, this metric will show the max RU/s currently provisioned on each physical partition. For containers using manual throughput, this metric will show the manual RU/s on each physical partition. +For containers using autoscale, this metric shows the max RU/s currently provisioned on each physical partition. For containers using manual throughput, this metric shows the manual RU/s on each physical partition. In the below example, we have an autoscale container provisioned with 5000 RU/s (scales between 500 - 5000 RU/s). It has five physical partitions and each physical partition has 1000 RU/s. 
Based on conditions 1 and 2, our container can potentially benefit from merging ### Merging physical partitions -In PowerShell, when the flag `-WhatIf` is passed in, Azure Cosmos DB will run a simulation and return the expected result of the merge, but won't run the merge itself. When the flag isn't passed in, the merge will execute against the resource. When finished, the command will output the current amount of storage in KB per physical partition post-merge. +In PowerShell, when the flag `-WhatIf` is passed in, Azure Cosmos DB runs a simulation and returns the expected result of the merge, without running the merge itself. When the flag isn't passed in, the merge executes against the resource. When finished, the command outputs the current amount of storage in KB per physical partition post-merge. > [!TIP] > Before running a merge, it's recommended to set your provisioned RU/s (either manual RU/s or autoscale max RU/s) as close as possible to your desired steady state RU/s post-merge, to help ensure the system calculates an efficient partition layout. #### [PowerShell](#tab/azure-powershell) -Use [`Install-Module`](/powershell/module/powershellget/install-module) to install the [Az.CosmosDB](/powershell/module/az.cosmosdb/) module with pre-release features enabled. +Use [`Install-Module`](/powershell/module/powershellget/install-module) to install the [Az.CosmosDB](/powershell/module/az.cosmosdb/) module with prerelease features enabled. ```azurepowershell-interactive $parameters = @{ To enroll in the preview, your Azure Cosmos DB account must meet all the followi - Your Azure Cosmos DB account uses API for NoSQL or MongoDB with version >=3.6. - Your Azure Cosmos DB account is using provisioned throughput (manual or autoscale). Merge doesn't apply to serverless accounts. - Currently, merge isn't supported for shared throughput databases. You may enroll an account that has both shared throughput databases and containers with dedicated throughput (manual or autoscale).- - However, only the containers with dedicated throughput will be able to be merged. + - However, only the containers with dedicated throughput can be merged. - Your Azure Cosmos DB account is a single-write region account (merge isn't currently supported for multi-region write accounts). - Your Azure Cosmos DB account doesn't use any of the following features: - [Point-in-time restore](continuous-backup-restore-introduction.md) - [Customer-managed keys](how-to-setup-cmk.md) - [Analytical store](analytical-store-introduction.md) - Your Azure Cosmos DB account uses bounded staleness, session, consistent prefix, or eventual consistency (merge isn't currently supported for strong consistency).-- If you're using API for NoSQL, your application must use the Azure Cosmos DB .NET V3 SDK, version 3.27.0 or higher. When merge preview enabled on your account, all requests sent from non .NET SDKs or older .NET SDK versions won't be accepted.+- If you're using API for NoSQL, your application must use the Azure Cosmos DB .NET v3 SDK (version 3.27.0 or higher) or Java v4 SDK (version 4.42.0 or higher). When merge preview is enabled on your account, the account doesn't accept requests sent from non .NET/Java SDKs or older .NET/Java SDK versions. - There are no SDK or driver requirements to use the feature with API for MongoDB. 
- Your Azure Cosmos DB account doesn't use any currently unsupported connectors: - Azure Data Factory To enroll in the preview, your Azure Cosmos DB account must meet all the followi - Azure Functions - Azure Search - Azure Cosmos DB Spark connector- - Any third party library or tool that has a dependency on an Azure Cosmos DB SDK that isn't .NET V3 SDK v3.27.0 or higher + - Any third party library or tool that has a dependency on an Azure Cosmos DB SDK that isn't .NET v3 SDK >= v3.27.0 or Java v4 SDK >= 4.42.0 ### Account resources and configuration - Merge is only available for API for NoSQL and MongoDB accounts. For API for MongoDB accounts, the MongoDB account version must be 3.6 or greater. - Merge is only available for single-region write accounts. Multi-region write account support isn't available.-- Accounts using merge functionality can't also use these features (if these features are added to a merge enabled account, resources in the account will no longer be able to be merged):+- Accounts using merge functionality can't also use these features (if these features are added to a merge enabled account, the account can't merge resources): - [Point-in-time restore](continuous-backup-restore-introduction.md) - [Customer-managed keys](how-to-setup-cmk.md) - [Analytical store](analytical-store-introduction.md) To enroll in the preview, your Azure Cosmos DB account must meet all the followi ### SDK requirements (API for NoSQL only) -Accounts with the merge feature enabled are supported only when you use the latest version of the .NET v3 SDK. When the feature is enabled on your account (regardless of whether you run the merge), you must only use the supported SDK using the account. Requests sent from other SDKs or earlier versions won't be accepted. As long as you're using the supported SDK, your application can continue to run while a merge is ongoing. +Accounts with the merge feature enabled are supported only when you use the latest version of the .NET v3 SDK or Java v4 SDK. When the feature is enabled on your account (regardless of whether you run the merge), you must use only a supported SDK with the account. Requests sent from other SDKs or earlier versions aren't accepted. As long as you're using a supported SDK, your application can continue to run while a merge is ongoing. Find the latest version of the supported SDK: | SDK | Supported versions | Package manager link | | | | |-| **.NET SDK v3** | *>= 3.27.0* | <https://www.nuget.org/packages/Microsoft.Azure.Cosmos/> | +| **.NET SDK v3** | *>= 3.27.0* | <https://www.nuget.org/packages/Microsoft.Azure.Cosmos> | +| **Java SDK v4** | *>= 4.42.0* | <https://mvnrepository.com/artifact/com.azure/azure-cosmos> | Support for other SDKs is planned for the future. > [!TIP]-> You should ensure that your application has been updated to use a compatible SDK version prior to enrolling in the preview. If you're using the legacy .NET V2 SDK, follow the [.NET SDK v3 migration guide](sql/migrate-dotnet-v3.md). +> You should ensure that your application has been updated to use a compatible SDK version prior to enrolling in the preview. If you're using a legacy SDK, follow the appropriate migration guide: +> +> - Legacy .NET v2 SDK: [.NET SDK v3 migration guide](nosql/migrate-dotnet-v3.md) +> - Legacy Java v3 SDK: [Java SDK v4 migration guide](nosql/migrate-java-v4-sdk.md) +> ### Unsupported connectors -If you enroll in the preview, the following connectors will fail. +If you enroll in the preview, the following connectors fail. 
- Azure Data Factory ¹ - Azure Stream Analytics ¹ If you enroll in the preview, the following connectors will fail. - Azure Functions ¹ - Azure Search ¹ - Azure Cosmos DB Spark connector ¹-- Any third party library or tool that has a dependency on an Azure Cosmos DB SDK that isn't .NET V3 SDK v3.27.0 or higher+- Any third party library or tool that has a dependency on an Azure Cosmos DB SDK that isn't .NET v3 SDK >= v3.27.0 or Java v4 SDK >= 4.42.0 ¹ Support for these connectors is planned for the future. |
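The merge row above truncates its PowerShell example after the `$parameters = @{` line. A minimal sketch of the flow it describes, assuming the prerelease Az.CosmosDB module exposes a container merge cmdlet with the name and parameters shown here (treat the cmdlet and parameter names as assumptions and confirm them against the linked article):

```azurepowershell-interactive
# Install the Az.CosmosDB module with prerelease features enabled.
Install-Module -Name Az.CosmosDB -AllowPrerelease -Force

# Hypothetical merge invocation for an API for NoSQL container.
# -WhatIf runs the simulation and reports the expected post-merge partition layout
# without changing the resource; omit it to run the merge for real.
$parameters = @{
    ResourceGroupName = "<resource-group>"
    AccountName       = "<account-name>"
    DatabaseName      = "<database-name>"
    Name              = "<container-name>"
}
Invoke-AzCosmosDBSqlContainerMerge @parameters -WhatIf
```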
cosmos-db | Introduction | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/mongodb/introduction.md | Cosmos DB for MongoDB has numerous benefits compared to other MongoDB service of - **Five 9's of availability**: [99.999% availability](../high-availability.md) is easily configurable to ensure your data is always there for you. - **Active-active database**: Unlike MongoDB Atlas, Cosmos DB for MongoDB supports active-active across multiple regions. Databases can span multiple regions, with no single point of failure for **writes and reads for the same data**. MongoDB Atlas global clusters only support active-passive deployments for writes for the same data. +- **Cost efficient, granular, unlimited scalability**: Sharded collections can scale to any size, unlike other MongoDB service offerings. The Azure Cosmos DB platform can scale in increments as small as 1/100th of a VM due to its architecture. This means that you can scale your database to the exact size you need, without paying for unused resources. -- **Cost efficient, granular, unlimited scalability**: Sharded collections can scale to any size, unlike other MongoDB service offerings. Scaling is done in a cost-efficient manner unlike other MongoDB service offerings. The Azure Cosmos DB platform can scale in increments as small as 1/100th of a VM due to its architecture. This means that you can scale your database to the exact size you need, without paying for unused resources.+- **Real time analytics (HTAP) at any scale**: Run analytics workloads against your transactional MongoDB data in real time with no effect on your database. This analysis is fast and inexpensive, due to the cloud native analytical columnar store being utilized, with no ETL pipelines. Easily create Power BI dashboards, integrate with Azure Machine Learning and Azure Cognitive Services, and bring all of your data from your MongoDB workloads into a single data warehousing solution. Learn more about the [Azure Synapse Link](../synapse-link.md). - **Serverless deployments**: Cosmos DB for MongoDB offers a [serverless capacity mode](../serverless.md). With [Serverless](../serverless.md), you're only charged per operation, and don't pay for the database when you don't use it. -- **Free Tier**: With Azure Cosmos DB free tier, you get the first 1000 RU/s and 25 GB of storage in your account for free forever, applied at the account level. Free tier accounts are [sandboxed](../limit-total-account-throughput.md).+- **Free Tier**: With Azure Cosmos DB free tier, you get the first 1000 RU/s and 25 GB of storage in your account for free forever, applied at the account level. Free tier accounts are automatically [sandboxed](../limit-total-account-throughput.md) so you never pay for usage. - **Free 7 day Continuous Backups**: Azure Cosmos DB for MongoDB offers free 7 day continuous backups for any amount of data. This means that you can restore your database to any point in time within the last 7 days. Cosmos DB for MongoDB has numerous benefits compared to other MongoDB service of - **Flexible single-field indexes**: Unlike single field indexes in MongoDB Atlas, [single field indexes in Cosmos DB for MongoDB](indexing.md) cover multi-field filter queries. There is no need to create compound indexes for each multi-field filter query. This increases developer productivity. -- **Real time analytics (HTAP) at any scale**: Cosmos DB for MongoDB offers the ability to run complex analytical queries. 
Use cases for these queries include business intelligence that can run against your database data in real time with no effect on your database. This analysis is fast and inexpensive, due to the cloud native analytical columnar store being utilized, with no ETL pipelines. Learn more about the [Azure Synapse Link](../synapse-link.md).- - **In-depth monitoring capabilities**: Cosmos DB for MongoDB integrates natively with [Azure Monitor](../../azure-monitor/overview.md) to provide in-depth monitoring capabilities. ## How Cosmos DB for MongoDB works |
cosmos-db | Quickstart Java Spring Data | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/nosql/quickstart-java-spring-data.md | Azure Cosmos DB is a multi-model database service that lets you quickly create a * An Azure account with an active subscription. * No Azure subscription? You can [try Azure Cosmos DB free](../try-free.md) with no credit card required.-* [Java Development Kit (JDK) 8](https://www.azul.com/downloads/azure-only/zulu/?&version=java-8-lts&architecture=x86-64-bit&package=jdk). Set the `JAVA_HOME` environment variable to the JDK install folder. +* [Java Development Kit (JDK) 8](/java/openjdk/download#openjdk-8). Set the `JAVA_HOME` environment variable to the JDK install folder. * A [Maven binary archive](https://maven.apache.org/download.cgi). On Ubuntu, run `apt-get install maven` to install Maven. * [Git](https://www.git-scm.com/downloads). On Ubuntu, run `sudo apt-get install git` to install Git. |
cosmos-db | Tutorial Springboot Azure Kubernetes Service | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/nosql/tutorial-springboot-azure-kubernetes-service.md | In this tutorial, you will set up and deploy a Spring Boot application that expo ## Pre-requisites - An Azure account with an active subscription. Create a [free account](https://azure.microsoft.com/free/) or [try Azure Cosmos DB for free](https://azure.microsoft.com/try/cosmosdb/) without an Azure subscription.-- [Java Development Kit (JDK) 8](https://www.azul.com/downloads/azure-only/zulu/?&version=java-8-lts&architecture=x86-64-bit&package=jdk). Point your `JAVA_HOME` environment variable to the path where the JDK is installed.+- [Java Development Kit (JDK) 8](/java/openjdk/download#openjdk-8). Point your `JAVA_HOME` environment variable to the path where the JDK is installed. - [Azure CLI](/cli/azure/install-azure-cli) to provision Azure services. - [Docker](https://docs.docker.com/engine/install/) to manage images and containers. - A recent version of [Maven](https://maven.apache.org/download.cgi) and [Git](https://www.git-scm.com/downloads). |
cosmos-db | Resource Locks | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/resource-locks.md | Title: Prevent Azure Cosmos DB resources from being deleted or changed -description: Use Azure Resource Locks to prevent Azure Cosmos DB resources from being deleted or changed. + Title: Use locks to protect resources ++description: Use Azure resource locks to prevent Azure Cosmos DB resources from being deleted or changed unintentionally. ++ Previously updated : 08/31/2022--- ms.devlang: azurecli Last updated : 03/23/2023+ -# Prevent Azure Cosmos DB resources from being deleted or changed +# Protect Azure Cosmos DB resources with locks [!INCLUDE[NoSQL, MongoDB, Cassandra, Gremlin, Table](includes/appliesto-nosql-mongodb-cassandra-gremlin-table.md)] -As an administrator, you may need to lock an Azure Cosmos DB account, database or container. Locks prevent other users in your organization from accidentally deleting or modifying critical resources. You can set the lock level to ``CanNotDelete`` or ``ReadOnly``. +As an administrator, you may need to lock an Azure Cosmos DB account, database or container. Locks prevent other users in your organization from accidentally deleting or modifying critical resources. You can set the lock level to `CanNotDelete` or `ReadOnly`. | Level | Description | | | |-| ``CanNotDelete`` | Authorized users can still read and modify a resource, but they can't delete the resource. | -| ``ReadOnly`` | Authorized users can read a resource, but they can't delete or update the resource. Applying this lock is similar to restricting all authorized users to the permissions granted by the **Reader** role. | +| `CanNotDelete` | Authorized users can still read and modify a resource, but they can't delete the resource. | +| `ReadOnly` | Authorized users can read a resource, but they can't delete or update the resource. Applying this lock is similar to restricting all authorized users to the permissions granted by the **Reader** role. | ++## Prerequisites ++- An existing Azure Cosmos DB account. + - If you have an Azure subscription, [create a new account](nosql/how-to-create-account.md?tabs=azure-portal). + - If you don't have an Azure subscription, create a [free account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F) before you begin. + - Alternatively, you can [try Azure Cosmos DB free](try-free.md) before you commit. ## How locks are applied When you apply a lock at a parent scope, all resources within that scope inherit Unlike Azure role-based access control, you use management locks to apply a restriction across all users and roles. To learn about role-based access control for Azure Cosmos DB see, [Azure role-based access control in Azure Cosmos DB](role-based-access-control.md). -Resource Manager locks apply only to operations that happen in the management plane, which consists of operations sent to <https://management.azure.com>. The locks don't restrict how resources perform their own functions. Resource changes are restricted, but resource operations aren't restricted. For example, a ReadOnly lock on an Azure Cosmos DB container prevents you from deleting or modifying the container. It doesn't prevent you from creating, updating, or deleting data in the container. Data transactions are permitted because those operations aren't sent to <https://management.azure.com>. +Resource Manager locks apply only to operations that happen in the management plane, which consists of operations sent to `https://management.azure.com`. 
The locks don't restrict how resources perform their own functions. Resource changes are restricted, but resource operations aren't restricted. For example, a ReadOnly lock on an Azure Cosmos DB container prevents you from deleting or modifying the container. It doesn't prevent you from creating, updating, or deleting data in the container. Data transactions are permitted because those operations aren't sent to `https://management.azure.com`. ## Manage locks -Resource locks don't work for changes made by users accessing Azure Cosmos DB using account keys unless the Azure Cosmos DB account is first locked by enabling the ``disableKeyBasedMetadataWriteAccess`` property. Ensure this property doesn't break existing applications that make changes to resources using any SDK, Azure portal, or third party tools. Enabling this property will break applications that connect via account keys and modify resources such as changing throughput, updating index policies, etc. To learn more and to go through a checklist to ensure your applications continue to function, see [preventing changes from the Azure Cosmos DB SDKs](role-based-access-control.md#prevent-sdk-changes) +Resource locks don't work for changes made by users accessing Azure Cosmos DB using account keys unless the Azure Cosmos DB account is first locked by enabling the `disableKeyBasedMetadataWriteAccess` property. Ensure this property doesn't break existing applications that make changes to resources using any SDK, Azure portal, or third party tools. Enabling this property breaks applications that connect via account keys to modify resources. These modifications can include changing throughput, updating index policies, etc. To learn more and to go through a checklist to ensure your applications continue to function, see [preventing changes from the Azure Cosmos DB SDKs](role-based-access-control.md#prevent-sdk-changes) ### [PowerShell](#tab/powershell) ```powershell-interactive-$RESOURCE_GROUP_NAME = "myResourceGroup" -$ACCOUNT_NAME = "my-cosmos-account" -$LOCK_NAME = "$accountName-Lock" +$RESOURCE_GROUP_NAME = "<resource-group>" +$ACCOUNT_NAME = "<account-name>" +$LOCK_NAME = "$ACCOUNT_NAME-lock" ``` First, update the account to prevent changes by anything that connects via account keys. New-AzResourceLock @parameters ### [Azure CLI](#tab/azure-cli) ```azurecli-interactive-resourceGroupName='myResourceGroup' -accountName='my-cosmos-account' +resourceGroupName='<resource-group>' +accountName='<account-name>' lockName="$accountName-Lock" ``` First, update the account to prevent changes by anything that connects via accou ```azurecli-interactive az cosmosdb update \- --name $accountName \ --resource-group $resourceGroupName \+ --name $accountName \ --disable-key-based-metadata-write-access true ``` Create a Delete Lock on an Azure Cosmos DB account resource ```azurecli-interactive az lock create \+ --resource-group $resourceGroupName \ --name $lockName \- --resource-group $resourceGroupName \ --lock-type 'CanNotDelete' \ --resource-type Microsoft.DocumentDB/databaseAccount \ --resource $accountName When applying a lock to an Azure Cosmos DB resource, use the [``Microsoft.Author ```json {- "type": "Microsoft.Authorization/locks", - "apiVersion": "2017-04-01", - "name": "cosmoslock", - "dependsOn": [ - "[resourceId('Microsoft.DocumentDB/databaseAccounts', parameters('accountName'))]" - ], - "properties": { - "level": "CanNotDelete", - "notes": "Do not delete Azure Cosmos DB account." 
- }, - "scope": "[resourceId('Microsoft.DocumentDB/databaseAccounts', parameters('accountName'))]" + "type": "Microsoft.Authorization/locks", + "apiVersion": "2017-04-01", + "name": "cosmoslock", + "dependsOn": [ + "[resourceId('Microsoft.DocumentDB/databaseAccounts', parameters('accountName'))]" + ], + "properties": { + "level": "CanNotDelete", + "notes": "Do not delete Azure Cosmos DB account." + }, + "scope": "[resourceId('Microsoft.DocumentDB/databaseAccounts', parameters('accountName'))]" } ``` Manage resource locks for Azure Cosmos DB: ## Next steps -- [Overview of Azure Resource Manager Locks](../azure-resource-manager/management/lock-resources.md)-- [How to audit Azure Cosmos DB control plane operations](audit-control-plane-logs.md)+> [!div class="nextstepaction"] +> [Overview of Azure Resource Manager Locks](../azure-resource-manager/management/lock-resources.md) |
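The resource-locks row above creates locks with PowerShell, the Azure CLI, and an ARM template, but doesn't show how to confirm the lock afterwards. A short follow-up sketch, assuming the Az PowerShell module is installed and reusing the same placeholder names:

```azurepowershell-interactive
# List the locks applied to the Azure Cosmos DB account to confirm the CanNotDelete lock is in place.
$RESOURCE_GROUP_NAME = "<resource-group>"
$ACCOUNT_NAME = "<account-name>"

Get-AzResourceLock `
    -ResourceGroupName $RESOURCE_GROUP_NAME `
    -ResourceName $ACCOUNT_NAME `
    -ResourceType "Microsoft.DocumentDB/databaseAccounts"
```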
cost-management-billing | Aws Integration Set Up Configure | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/costs/aws-integration-set-up-configure.md | Use the Create a New Role wizard: 4. On the **Select trusted entity** page, select **AWS account** and then under **An AWS account**, select **Another AWS account**. 5. Under **Account ID**, enter **432263259397**. 6. Under **Options**, select **Require external ID (Best practice when a third party will assume this role)**.-7. Under **External ID**, enter the external ID, which is a shared passcode between the AWS role and Cost Management. The same external ID is also used on the **New Connector** page in Cost Management. Microsoft recommends that you use a strong passcode policy when entering the external ID. +7. Under **External ID**, enter the external ID, which is a shared passcode between the AWS role and Cost Management. The same external ID is also used on the **New Connector** page in Cost Management. Microsoft recommends that you use a strong passcode policy when entering the external ID. The external ID should comply with AWS restrictions: + - Type: String + - Length constraints: Minimum length of 2. Maximum length of 1224. + - Must satisfy regular expression pattern: [\w+=,.@: /-]* > [!NOTE] > Don't change the selection for **Require MFA**. It should remain cleared. 8. Select **Next: Permissions**. |
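The AWS connector row above asks for a strong external ID that satisfies the stated length and character constraints. One way to generate a compliant value locally, shown only as a sketch; any generator that respects those constraints works:

```bash
# Produces a 32-character hexadecimal string; hex characters fall within the allowed character set.
openssl rand -hex 16
```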
cost-management-billing | Mca Request Billing Ownership | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/manage/mca-request-billing-ownership.md | tags: billing Previously updated : 11/14/2022 Last updated : 03/23/2023 This article helps you transfer billing ownership for your Azure products (subsc [Check if you have access to a Microsoft Customer Agreement](#check-for-access). -The transition moves only the billing responsibility for your Azure products – the Azure resources tied to your products don't move, so the transition won't interrupt your Azure services. +The transition moves only the billing responsibility for your Azure products – the Azure resources tied to your products don't move, so the transition doesn't interrupt your Azure services. -This process contains the following primary tasks, which we'll guide you through step by step: +This process contains the following primary tasks: 1. Request billing ownership 2. Review and approve the transfer request The person creating the transfer request uses the following procedure to create When the request is created, an email is sent to the target recipient. -The following procedure has you navigate to **Transfer requests** by selecting a **Billing scope** > **Billing account** > **Billing profile** > **Invoice sections** to **Add a new request**. If you navigate to **Add a new request** from selecting a billing profile, you'll have to select a billing profile and then select an invoice section. +The following procedure has you navigate to **Transfer requests** by selecting a **Billing scope** > **Billing account** > **Billing profile** > **Invoice sections** to **Add a new request**. If you navigate to **Add a new request** from selecting a billing profile, select a billing profile, and then select an invoice section. 1. Sign in to the [Azure portal](https://portal.azure.com) as an invoice section owner or contributor for a billing account for Microsoft Customer Agreement. Use the same credentials that you used to accept your Microsoft Customer Agreement. 1. Search for **Cost Management + Billing**. :::image type="content" source="./media/mca-request-billing-ownership/billing-search-cost-management-billing.png" alt-text="Screenshot that shows Azure portal search for Cost Management + Billing." lightbox="./media/mca-request-billing-ownership/billing-search-cost-management-billing.png" ::: 1. On the billing scopes page, select **Billing scopes** and then select the billing account, which would be used to pay for Azure usage in your products. Select the billing account labeled **Microsoft Customer Agreement**. :::image type="content" source="./media/mca-request-billing-ownership/billing-scopes.png" alt-text="Screenshot that shows search in portal for Cost Management + Billing." lightbox="./media/mca-request-billing-ownership/billing-scopes.png" ::: - The Azure portal remembers the last billing scope that you access and displays the scope the next time you come to Cost Management + Billing page. You won't see the billing scopes page if you have visited Cost Management + Billing earlier. If so, check that you are in the [right scope](#check-for-access). If not, [switch the scope](view-all-accounts.md#switch-billing-scope-in-the-azure-portal) to select the billing account for a Microsoft Customer Agreement. -1. Select **Billing profiles** from the left-hand side and then select a **Billing profile** from the list.
Once you take over the ownership of the products, their usage will be billed to this billing profile. + The Azure portal remembers the last billing scope that you access and displays the scope the next time you come to the Cost Management + Billing page. You don't see the billing scopes page if you visited Cost Management + Billing earlier. If so, check that you are in the [right scope](#check-for-access). If not, [switch the scope](view-all-accounts.md#switch-billing-scope-in-the-azure-portal) to select the billing account for a Microsoft Customer Agreement. +1. Select **Billing profiles** from the left-hand side and then select a **Billing profile** from the list. Once you take over the ownership of the products, their usage is billed to this billing profile. :::image type="content" source="./media/mca-request-billing-ownership/billing-profile.png" alt-text="Screenshot that shows selecting billing profiles." lightbox="./media/mca-request-billing-ownership/billing-profile.png" ::: *If you don't see Billing profiles, you aren't in the right billing scope.* You need to select a billing account for a Microsoft Customer Agreement and then select Billing profiles. To learn how to change scopes, see [Switch billing scopes in the Azure portal](view-all-accounts.md#switch-billing-scope-in-the-azure-portal). 1. Select **Invoice sections** from the left-hand side and then select an invoice section from the list. Each billing profile contains one invoice section by default. Select the invoice where you want to move your Azure product billing - that's where the Azure product consumption is transferred to. The recipient of the transfer request uses the following procedure to review and 1. In the Azure portal, the user selects the billing account that they want to transfer Azure products from. Then they select eligible subscriptions on the **Subscriptions** tab. If the owner doesn't want to transfer subscriptions and instead wants to transfer reservations only, make sure that no subscriptions are selected. :::image type="content" source="./media/mca-request-billing-ownership/review-transfer-request-subscriptions-select.png" alt-text="Screenshot showing the Subscriptions tab." lightbox="./media/mca-request-billing-ownership/review-transfer-request-subscriptions-select.png" ::: *Disabled subscriptions can't be transferred.*-1. If there are reservations available to transfer, select the **Reservations** tab and then select them. If reservations won't be transferred, make sure that no reservations are selected. +1. If there are reservations available to transfer, select the **Reservations** tab, and then select them. If you don't want to transfer reservations, make sure that no reservations are selected. If reservations are transferred, they're applied to the scope that's set in the request. If you want to change the scope of the reservation after it's transferred, see [Change the reservation scope](../reservations/manage-reserved-vm-instance.md#change-the-reservation-scope). :::image type="content" source="./media/mca-request-billing-ownership/review-transfer-request-reservations-select.png" alt-text="Screenshot showing the Reservations tab." lightbox="./media/mca-request-billing-ownership/review-transfer-request-reservations-select.png" :::-1. If there are savings plans available to transfer, select the **Saving plan** tab and then select them. If savings plans won't be transferred, make sure that no savings plans are selected. +1. 
If there are savings plans available to transfer, select the **Saving plan** tab, and then select them. If you don't want to transfer savings plans, make sure that no savings plans are selected. If savings plans are transferred, they're applied to the scope that's set in the request. If you want to change the scope of the savings plan after it's transferred, see [Change the savings plan scope](../savings-plan/manage-savings-plan.md#change-the-savings-plan-scope). :::image type="content" source="./media/mca-request-billing-ownership/review-transfer-request-savings-plan-select.png" alt-text="Screenshot showing the Savings plan tab." lightbox="./media/mca-request-billing-ownership/review-transfer-request-savings-plan-select.png" ::: 1. Select the **Review request** tab and verify the information about the products to transfer. If there are Warnings or Failed status messages, see the following information. When you're ready to continue, select **Transfer**. :::image type="content" source="./media/mca-request-billing-ownership/review-transfer-request-complete.png" alt-text="Screenshot showing the Review request tab where you review your transfer selections." lightbox="./media/mca-request-billing-ownership/review-transfer-request-complete.png" :::-1. You'll briefly see a `Transfer is in progress` message. When the transfer is completed successfully, you'll see the Transfer details page with the `Transfer completed successfully` message. +1. The `Transfer is in progress` message is briefly shown. When the transfer is completed successfully, you see the Transfer details page with the `Transfer completed successfully` message. :::image type="content" source="./media/mca-request-billing-ownership/transfer-completed-successfully.png" alt-text="Screenshot showing the Transfer completed successfully page." lightbox="./media/mca-request-billing-ownership/transfer-completed-successfully.png" ::: On the Review request tab, the following status messages might be displayed. * **Ready to transfer** - Validation for this Azure product has passed and can be transferred.-* **Warnings** - There's a warning for the selected Azure product. While the product can still be transferred, doing so will have some consequence that the user should be aware of in case they want to take mitigating actions. For example, the Azure subscription being transferred is benefitting from a reservation. After transfer, the subscription will no longer receive that benefit. To maximize savings, ensure that the reservation is associated to another subscription that can use its benefits. Instead, the user can also choose to go back to the selection page and unselect this Azure subscription. Select **Check details** for more information. -* **Failed** - The selected Azure product can't be transferred because of an error. User will need to go back to the selection page and unselect this product to transfer the other selected Azure products. +* **Warnings** - There's a warning for the selected Azure product. While the product can still be transferred, doing so has some consequence that the user should be aware of in case they want to take mitigating actions. For example, the Azure subscription being transferred is benefitting from a reservation. After transfer, the subscription will no longer receive that benefit. To maximize savings, ensure that the reservation is associated to another subscription that can use its benefits. Instead, the user can also choose to go back to the selection page and unselect this Azure subscription. 
Select **Check details** for more information. +* **Failed** - The selected Azure product can't be transferred because of an error. The user needs to go back to the selection page and unselect this product to transfer the other selected Azure products. ## Check the transfer request status As the user that approved the transfer: ## Supported subscription types -You can request billing ownership of products for the subscription types listed below. +You can request billing ownership of products for the following subscription types. - [Action pack](https://azure.microsoft.com/offers/ms-azr-0025p/)¹-- [Azure in Open Licensing](https://azure.microsoft.com/offers/ms-azr-0111p/)¹ - [Azure Pass Sponsorship](https://azure.microsoft.com/offers/azure-pass/)¹ - [Enterprise Dev/Test](https://azure.microsoft.com/offers/ms-azr-0148p/)-- [Free Trial](https://azure.microsoft.com/offers/ms-azr-0044p/)¹ - [Pay-As-You-Go](https://azure.microsoft.com/offers/ms-azr-0003p/) - [Pay-As-You-Go Dev/Test](https://azure.microsoft.com/offers/ms-azr-0023p/) - [Microsoft Azure Plan](https://azure.microsoft.com/offers/ms-azr-0017g/)² |
data-factory | Parameterize Linked Services | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/parameterize-linked-services.md | All the linked service types are supported for parameterization. - Amazon S3 Compatible Storage - Azure Blob Storage - Azure Cosmos DB for NoSQL+- Azure Databricks Delta Lake - Azure Data Explorer - Azure Data Lake Storage Gen1 - Azure Data Lake Storage Gen2 |
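The Data Factory row above only lists the linked service types that support parameterization. For context, a hedged sketch of what a parameterized linked service definition looks like for one of the listed types; the names are illustrative and authentication properties are omitted:

```json
{
  "name": "CosmosDbNoSql_Parameterized",
  "properties": {
    "type": "CosmosDb",
    "parameters": {
      "accountName": { "type": "String" },
      "databaseName": { "type": "String" }
    },
    "typeProperties": {
      "connectionString": "AccountEndpoint=https://@{linkedService().accountName}.documents.azure.com:443/;Database=@{linkedService().databaseName};"
    }
  }
}
```

Datasets and pipelines that reference this linked service supply `accountName` and `databaseName` at runtime, so one definition can serve multiple environments.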
data-manager-for-agri | How To Set Up Private Links | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-manager-for-agri/how-to-set-up-private-links.md | + + Title: Creating a private endpoint for Azure Data Manager for Agriculture +description: Learn how to use private links in Azure Data Manager for Agriculture ++++ Last updated : 03/22/2023++++# Create a private endpoint for Azure Data Manager for Agriculture ++[Azure Private Link](../private-link/private-link-overview.md) provides private connectivity from a virtual network to Azure platform as a service (PaaS). It simplifies the network architecture and secures the connection between endpoints in Azure by eliminating data exposure to the public internet. ++By using Azure Private Link, you can connect to an Azure Data Manager for Agriculture service from your virtual network via a private endpoint, which is a set of private IP addresses in a subnet within the virtual network. You can then limit access to your Azure Data Manager for Agriculture Preview instance over these private IP addresses. ++This article describes how to create a private endpoint and the approval process for Azure Data Manager for Agriculture Preview. ++## How to set up a private endpoint +Private endpoints can be created by using the Azure portal, PowerShell, or the Azure CLI: ++* [Azure portal](../private-link/create-private-endpoint-portal.md) +* [PowerShell](../private-link/create-private-endpoint-powershell.md) +* [CLI](../private-link/create-private-endpoint-cli.md) ++### Approval process for a private endpoint +Once the network admin creates the private endpoint, the Data Manager for Agriculture admin can manage the private endpoint connection to the Data Manager for Agriculture resource. ++1. Navigate to the Data Manager for Agriculture resource in the Azure portal. Select the Networking tab in the left pane. The pane shows a list of all private endpoint connections and their corresponding private endpoints. + :::image type="content" source="./media/how-to-set-up-private-links/pec-data-manager-agriculture.png" alt-text="Screenshot showing list of private endpoint connections in Azure portal."::: ++2. Select an individual private endpoint connection from the list. + :::image type="content" source="./media/how-to-set-up-private-links/pec-select.png" alt-text="Screenshot showing how to select a private endpoint."::: ++3. The Data Manager for Agriculture administrator can choose to approve or reject a private endpoint connection and can optionally add a short text response. + :::image type="content" source="./media/how-to-set-up-private-links/pec-approve.png" alt-text="Screenshot showing how to approve a private endpoint connection."::: ++4. After approval or rejection, the list will reflect the appropriate state along with the response text. + :::image type="content" source="./media/how-to-set-up-private-links/pec-list-after.png" alt-text="Screenshot showing private endpoint connection status."::: ++5. Finally, select the private endpoint name to see the network interface details and IP address of your private endpoint. 
+ :::image type="content" source="./media/how-to-set-up-private-links/pec-click.png" alt-text="Screenshot showing where to click to get network interface details."::: + :::image type="content" source="./media/how-to-set-up-private-links/pec-list-after.png" alt-text="Screenshot showing private endpoint connection status."::: + :::image type="content" source="./media/how-to-set-up-private-links/pec-ip-display-new.png" alt-text="Screenshot showing private endpoint IP address."::: ++## Disable public access to your Data Manager for Agriculture resource +If you want to disable all public access to your Data Manager for Agriculture resource and allow connections only from your virtual network, first ensure that your private endpoint connections are enabled and configured. To disable public access to your Data Manager for Agriculture resource: ++1. Go to the Networking page of your Data Manager for Agriculture resource. +2. Select the Deny public network access checkbox. +++## Next steps ++* See the Hierarchy Model and learn how to create and organize your agriculture data [here](./concepts-hierarchy-model.md). +* Understand our APIs [here](/rest/api/data-manager-for-agri). + |
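The private-link row above walks through the portal flow. A hedged Azure CLI sketch of the creation step for readers who prefer scripting; all names are placeholders, and the correct `--group-id` value for Data Manager for Agriculture is an assumption you can confirm with `az network private-link-resource list`:

```azurecli-interactive
az network private-endpoint create \
  --resource-group <resource-group> \
  --name <private-endpoint-name> \
  --vnet-name <vnet-name> \
  --subnet <subnet-name> \
  --private-connection-resource-id <data-manager-for-agriculture-resource-id> \
  --group-id <sub-resource-group-id> \
  --connection-name <connection-name>
```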
ddos-protection | Alerts | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ddos-protection/alerts.md | You can keep your resources for the next tutorial. If no longer needed, delete t 1. Select the alerts created in this tutorial, then select **Delete**. ## Next steps -In this article, you learned how to configure metric alerts through Azure Monitor. --To learn how to test and simulate a DDoS attack, see the simulation testing guide: --> [!div class="nextstepaction"] -> [Test through simulations](test-through-simulations.md) +* [Test through simulations](test-through-simulations.md) +* [View alerts in Microsoft Defender for Cloud](ddos-view-alerts-defender-for-cloud.md) |
ddos-protection | Ddos Configure Log Analytics Workspace | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ddos-protection/ddos-configure-log-analytics-workspace.md | For more information, see [Log Analytics workspace overview](../azure-monitor/lo ## Next steps -In this article, you learned how to configure a Log Analytics workspace for Azure DDoS Protection. --To learn how to configure diagnostic logging, see the diagnostic logging guide: --> [!div class="nextstepaction"] -> [Test through simulations](test-through-simulations.md) +* [Configure diagnostic logging alerts](ddos-diagnostic-alert-templates.md) |
ddos-protection | Ddos Diagnostic Alert Templates | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ddos-protection/ddos-diagnostic-alert-templates.md | You can keep your resources for the next guide. If no longer needed, delete the ## Next steps -In this article, you learned how to configure diagnostic logging alerts through Azure Monitor. --To learn how to test and simulate a DDoS attack, see the simulation testing guide: --> [!div class="nextstepaction"] -> [Test through simulations](test-through-simulations.md) +* [Test through simulations](test-through-simulations.md) +* [View alerts in Microsoft Defender for Cloud](ddos-view-alerts-defender-for-cloud.md) |
ddos-protection | Ddos Disaster Recovery Guidance | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ddos-protection/ddos-disaster-recovery-guidance.md | To create a virtual network, see [Create a virtual network](../virtual-network/m ## Next steps -- Learn how to [create a DDoS protection plan](manage-ddos-protection.md).+- Learn how to [configure diagnostic logging](diagnostic-logging.md). |
ddos-protection | Ddos Protection Features | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ddos-protection/ddos-protection-features.md | Learn how your services will respond to an attack by [testing through simulation ## Next steps -- Learn how to [create an Azure DDoS Protection plan](manage-ddos-protection.md).+- Learn more about [reference architectures](ddos-protection-reference-architectures.md). |
ddos-protection | Ddos Protection Reference Architectures | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ddos-protection/ddos-protection-reference-architectures.md | For more information about hub-and-spoke topology, see [Hub-spoke network topolo ## Next steps -- Learn how to [create a DDoS protection plan](manage-ddos-protection.md).+- Learn how to [configure Network Protection](manage-ddos-protection.md). |
ddos-protection | Ddos Protection Sku Comparison | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ddos-protection/ddos-protection-sku-comparison.md | The following table shows features and corresponding SKUs. ## Next steps -* [Quickstart: Create an Azure DDoS Protection Plan](manage-ddos-protection.md) * [Azure DDoS Protection features](ddos-protection-features.md)+* [Reference architectures](ddos-protection-reference-architectures.md) + |
ddos-protection | Ddos Response Strategy | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ddos-protection/ddos-response-strategy.md | If you suspect you're under a DDoS attack, escalate through your normal Azure Su ## Next steps -- Learn how to [create a DDoS protection plan](manage-ddos-protection.md).+- Learn how to [configure metric alerts through portal](alerts.md). +- Learn how to [engage DDoS Rapid Response](ddos-rapid-response.md). |
ddos-protection | Ddos View Alerts Defender For Cloud | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ddos-protection/ddos-view-alerts-defender-for-cloud.md | In this How-To, you learned how to view alerts in Microsoft Defender for Cloud. To learn how to test and simulate a DDoS attack, see the simulation testing guide: > [!div class="nextstepaction"]-> [Test through simulations](test-through-simulations.md) +> [Engage with Azure DDoS Rapid Response](ddos-rapid-response.md) |
ddos-protection | Ddos View Diagnostic Logs | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ddos-protection/ddos-view-diagnostic-logs.md | In this guide, you'll learn how to view Azure DDoS Protection diagnostic logs, i - Configure DDoS Protection diagnostic logs. To learn more, see [Configure diagnostic logs](diagnostic-logging.md). - Simulate an attack using one of our simulation partners. To learn more, see [Test with simulation partners](test-through-simulations.md). -### View in log analytics workspace +### View in Log Analytics workspace 1. Sign in to the [Azure portal](https://portal.azure.com/). 1. In the search box at the top of the portal, enter **Log Analytics workspace**. Select **Log Analytics workspace** in the search results. Attack mitigation reports use the Netflow protocol data, which is aggregated to ## Next steps -*[Engage DDoS Rapid Response](ddos-rapid-response.md) +* [Engage DDoS Rapid Response](ddos-rapid-response.md) |
ddos-protection | Diagnostic Logging | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ddos-protection/diagnostic-logging.md | For more information on log schemas, see [View diagnostic logs](ddos-view-diagno ## Next steps -In this guide, you learned how to configure Azure DDoS Protection diagnostic logs, including notifications, mitigation reports and mitigation flow logs. --To learn how to configure attack alerts, continue to the next guide. --> [!div class="nextstepaction"] -> [Configure DDoS protection alerts](alerts.md) +* [Test through simulations](test-through-simulations.md) +* [View logs in Log Analytics workspace](ddos-view-diagnostic-logs.md) |
ddos-protection | Fundamental Best Practices | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ddos-protection/fundamental-best-practices.md | We often see customers' on-premises resources getting attacked along with their ## Next steps -- Learn how to [create an Azure DDoS protection plan](manage-ddos-protection.md).+* Learn more about [business continuity](ddos-disaster-recovery-guidance.md). |
ddos-protection | Manage Permissions | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ddos-protection/manage-permissions.md | For customers who have various subscriptions, and who want to ensure a single pl ## Next steps -To learn how to view and configure telemetry for your DDoS protection plan, continue to the tutorials. --> [!div class="nextstepaction"] -> [View and configure DDoS protection telemetry](telemetry.md) +* [View and configure DDoS protection telemetry](telemetry.md) |
ddos-protection | Types Of Attacks | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ddos-protection/types-of-attacks.md | Azure DDoS Protection protects resources in a virtual network including public I ## Next steps -- Learn how to [create a DDoS protection plan](manage-ddos-protection.md).+* [Components of a DDoS response strategy](ddos-response-strategy.md). |
defender-for-iot | References Data Retention | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/references-data-retention.md | Last updated 01/22/2023 # Data retention across Microsoft Defender for IoT -Microsoft Defender for IoT stores data in the Azure portal, on OT network sensors, and on-premises management consoles. +Microsoft Defender for IoT sensors learn a baseline of your network traffic during the initial learning period after deployment. This learned baseline is stored indefinitely on your sensors. ++Defender for IoT also stores other data in the Azure portal, on OT network sensors, and on-premises management consoles. Each storage location affords a certain storage capacity and retention time. This article describes how much and how long each type of data is stored in each location before it's either deleted or overridden. For more information, see: - [Manage individual OT network sensors](how-to-manage-individual-sensors.md) - [Manage OT network sensors from an on-premises management console](how-to-manage-sensors-from-the-on-premises-management-console.md) - [Manage an on-premises management console](how-to-manage-the-on-premises-management-console.md)+- [Azure data encryption](/azure/security/fundamentals/encryption-overview) |
digital-twins | Overview | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/digital-twins/overview.md | You can read about the service limits of Azure Digital Twins in the [Azure Digit ### Terminology -You can view a list of common IoT terms and their uses across the Azure IoT services, including Azure Digital Twins, in the [Azure IoT Glossary](../iot-fundamentals/iot-glossary.md?toc=/azure/digital-twins/toc.json&bc=/azure/digital-twins/breadcrumb/toc.json). This resource may be a useful reference while you get started with Azure Digital Twins and building an IoT solution. +You can view a list of common IoT terms and their uses across the Azure IoT services, including Azure Digital Twins, in the [Azure IoT Glossary](../iot/iot-glossary.md?toc=/azure/digital-twins/toc.json&bc=/azure/digital-twins/breadcrumb/toc.json). This resource may be a useful reference while you get started with Azure Digital Twins and building an IoT solution. ## Next steps |
event-hubs | Event Hubs Federation Overview | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-hubs/event-hubs-federation-overview.md | Practically, that means your solution will maintain multiple Event Hubs, often in different regions and Event Hubs namespaces, and then replicate events between them. You might also exchange events with sources and targets like [Azure Service Bus](../service-bus-messaging/service-bus-messaging-overview.md), [Azure-IoT Hub](../iot-fundamentals/iot-introduction.md), or [Apache +IoT Hub](../iot/iot-introduction.md), or [Apache Kafka](https://kafka.apache.org). Maintaining multiple active Event Hubs in different regions also allows clients |
hdinsight | Hdinsight Component Versioning | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/hdinsight-component-versioning.md | This table lists the versions of HDInsight that are available in the Azure porta | HDInsight version | VM OS | Release date| Support type | Support expiration date | Retirement date | High availability | | | | | | | | |-| HDInsight 5.1 |Ubuntu 18.0.4 LTS |Feb 27, 2022 | [Standard](hdinsight-component-versioning.md#support-options-for-hdinsight-versions) | Not announced |Not announced| Yes | +| HDInsight 5.1 |Ubuntu 18.0.4 LTS |Feb 27, 2023 | [Standard](hdinsight-component-versioning.md#support-options-for-hdinsight-versions) | Not announced |Not announced| Yes | | HDInsight 5.0 |Ubuntu 18.0.4 LTS |July 01, 2022 | [Standard](hdinsight-component-versioning.md#support-options-for-hdinsight-versions) | Not announced |Not announced| Yes | | [HDInsight 4.0](hdinsight-40-component-versioning.md) |Ubuntu 18.0.4 LTS |September 24, 2018 | [Standard](hdinsight-component-versioning.md#support-options-for-hdinsight-versions) | Not announced | Not announced |Yes | |
iot-central | Concepts Device Templates | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-central/core/concepts-device-templates.md | For a device to interact with IoT Central, it must be assigned to a device templ ### Automatic assignment -IoT Central can automatically assign a device to a device template when the device connects. A device should send a [model ID](../../iot-fundamentals/iot-glossary.md?toc=/azure/iot-central/toc.json&bc=/azure/iot-central/breadcrumb/toc.json#model-id) when it connects. IoT Central uses the model ID to identify the device template for that specific device model. The discovery process works as follows: +IoT Central can automatically assign a device to a device template when the device connects. A device should send a [model ID](../../iot/iot-glossary.md?toc=/azure/iot-central/toc.json&bc=/azure/iot-central/breadcrumb/toc.json#model-id) when it connects. IoT Central uses the model ID to identify the device template for that specific device model. The discovery process works as follows: 1. If the device template is already published in the IoT Central application, the device is assigned to the device template. 1. If the device template isn't already published in the IoT Central application, IoT Central looks for the device model in the [public model repository](https://github.com/Azure/iot-plugandplay-models). If IoT Central finds the model, it uses it to generate a basic device template. |
iot-central | Howto Manage Device Templates With Rest Api | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-central/core/howto-manage-device-templates-with-rest-api.md | The IoT Central REST API lets you: * Get a list of the device templates in the application * Get a device template by ID * Delete a device template in your application+* Filter the list of device templates in the application ## Add a device template The response to this request looks like the following example: ### Use ODATA filters -You can use ODATA filters to filter the results returned by the list device templates API. +In the preview version of the API (`api-version=2022-10-31-preview`), you can use ODATA filters to filter and sort the results returned by the list device templates API. -### $top +### maxpagesize -Use the **$top** filter to set the result size. The maximum returned result size is 100, and the default size is 25. +Use the **maxpagesize** filter to set the result size. The maximum returned result size is 100, and the default size is 25. Use the following request to retrieve the top 10 device templates from your application: ```http-GET https://{your app subdomain}/api/deviceTemplates?api-version=2022-07-31&$top=10 +GET https://{your app subdomain}/api/deviceTemplates?api-version=2022-10-31-preview&maxpagesize=10 ``` The response to this request looks like the following example: The response to this request looks like the following example: "dtmi:dtdl:context;2" ] },- ... + // ... ],- "nextLink": "https://custom-12qmyn6sm0x.azureiotcentral.com/api/deviceTemplates?api-version=2022-07-31&%24top=1&%24skiptoken=%7B%22token%22%3A%22%2BRID%3A%7EJWYqAKZQKp20qCoAAAAACA%3D%3D%23RT%3A1%23TRC%3A1%23ISV%3A2%23IEO%3A65551%23QCF%3A4%22%2C%22range%22%3A%7B%22min%22%3A%2205C1DFFFFFFFFC%22%2C%22max%22%3A%22FF%22%7D%7D" + "nextLink": "https://{your app subdomain}.azureiotcentral.com/api/deviceTemplates?api-version=2022-07-31&%24top=1&%24skiptoken=%7B%22token%22%3A%22%2BRID%3A%7EJWYqAKZQKp20qCoAAAAACA%3D%3D%23RT%3A1%23TRC%3A1%23ISV%3A2%23IEO%3A65551%23QCF%3A4%22%2C%22range%22%3A%7B%22min%22%3A%2205C1DFFFFFFFFC%22%2C%22max%22%3A%22FF%22%7D%7D" } ``` The response includes a **nextLink** value that you can use to retrieve the next page of results. -### $filter +### filter -Use **$filter** to create expressions that filter the list of device templates. The following table shows the comparison operators you can use: +Use **filter** to create expressions that filter the list of device templates. The following table shows the comparison operators you can use: | Comparison Operator | Symbol | Example | | -- | | | Use **$filter** to create expressions that filter the list of device templates. 
| Greater than or equals | ge | `displayName ge 'template A'` | | Greater than | gt | `displayName gt 'template B'` | -The following table shows the logic operators you can use in *$filter* expressions: +The following table shows the logic operators you can use in *filter* expressions: | Logic Operator | Symbol | Example | | -- | | | | AND | and | `'@id' eq 'dtmi:example:test;1' and capabilityModelId eq 'dtmi:example:test:model1;1'` | | OR | or | `'@id' eq 'dtmi:example:test;1' or displayName ge 'template'` | -Currently, *$filter* works with the following device template fields: +Currently, *filter* works with the following device template fields: | FieldName | Type | Description | | -- | | -- | Currently, *$filter* works with the following device template fields: | `displayName` | string | Device template display name | | `capabilityModelId` | string | Device template capability model ID | -**$filter supported functions:** +**filter supported functions:** Currently, the only supported filter function for device template lists is the `contains` function: ```txt-$filter=contains(displayName, 'template1') -$filter=contains(displayName, 'template1) eq false +filter=contains(displayName, 'template1') ``` The following example shows how to retrieve all the device templates where the display name contains the string `thermostat`: ```http-GET https://{your app subdomain}/api/deviceTemplates?api-version=2022-07-31&$filter=contains(displayName, 'thermostat') +GET https://{your app subdomain}/api/deviceTemplates?api-version=2022-10-31-preview&filter=contains(displayName, 'thermostat') ``` The response to this request looks like the following example: The response to this request looks like the following example: } ``` -### $orderby +### orderby -Use **$orderby** to sort the results. Currently, **$orderby** only lets you sort on **displayName**. By default, **$orderby** sorts in ascending order. Use **desc** to sort in descending order, for example: +Use **orderby** to sort the results. Currently, **orderby** only lets you sort on **displayName**. By default, **orderby** sorts in ascending order. Use **desc** to sort in descending order, for example: ```txt-$orderby=displayName -$orderby=displayName desc +orderby=displayName +orderby=displayName desc ``` The following example shows how to retrieve all the device templates where the result is sorted by `displayName`: ```http-GET https://{your app subdomain}/api/deviceTemplates?api-version=2022-07-31&$orderby=displayName +GET https://{your app subdomain}/api/deviceTemplates?api-version=2022-10-31-preview&orderby=displayName ``` The response to this request looks like the following example: You can also combine two or more filters. The following example shows how to retrieve the top two device templates where the display name contains the string `thermostat`. ```http-GET https://{your app subdomain}/api/deviceTemplates?api-version=2022-07-31&$filter=contains(displayName, 'thermostat')&$top=2 +GET https://{your app subdomain}/api/deviceTemplates?api-version=2022-10-31-preview&filter=contains(displayName, 'thermostat')&maxpagesize=2 ``` The response to this request looks like the following example: |
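The REST API rows above and below show raw request lines only. A quick sketch of issuing one of them with curl, assuming you already hold a bearer token or IoT Central API token for the `Authorization` header and substituting your own application subdomain:

```bash
# List device templates whose display name contains 'thermostat', two per page.
curl -sS \
  -H "Authorization: Bearer <token>" \
  "https://<your-app-subdomain>.azureiotcentral.com/api/deviceTemplates?api-version=2022-10-31-preview&filter=contains(displayName,%20'thermostat')&maxpagesize=2"
```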
iot-central | Howto Manage Devices With Rest Api | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-central/core/howto-manage-devices-with-rest-api.md | Title: How to use the IoT Central REST API to manage devices description: How to use the IoT Central REST API to add devices in an application Previously updated : 11/30/2022 Last updated : 03/23/2023 The IoT Central REST API lets you: * Get a device by ID * Get a device credential * Delete a device in your application+* Filter the list of devices in the application ### Add a device If you're adding an IoT Edge device, you can use the API to assign an IoT Edge d ### Use ODATA filters -You can use ODATA filters to filter the results returned by the list devices API. +In the preview version of the API (`api-version=2022-10-31-preview`), you can use ODATA filters to filter and sort the results returned by the list devices API. -### $top +### maxpagesize -Use the **$top** to set the result size, the maximum returned result size is 100, the default size is 25. +Use the **maxpagesize** to set the result size, the maximum returned result size is 100, the default size is 25. Use the following request to retrieve a top 10 device from your application: ```http-GET https://{your app subdomain}/api/devices?api-version=2022-07-31&$top=10 +GET https://{your app subdomain}/api/devices?api-version=2022-10-31-preview&maxpagesize=10 ``` The response to this request looks like the following example: The response to this request looks like the following example: }, ... ],- "nextLink": "https://custom-12qmyn6sm0x.azureiotcentral.com/api/devices?api-version=2022-07-31&%24top=1&%24skiptoken=%257B%2522token%2522%253A%2522%252BRID%253A%7EJWYqAOis7THQbBQAAAAAAg%253D%253D%2523RT%253A1%2523TRC%253A1%2523ISV%253A2%2523IEO%253A65551%2523QCF%253A4%2522%252C%2522range%2522%253A%257B%2522min%2522%253A%2522%2522%252C%2522max%2522%253A%252205C1D7F7591D44%2522%257D%257D" + "nextLink": "https://{your app subdomain}.azureiotcentral.com/api/devices?api-version=2022-07-31&%24top=1&%24skiptoken=%257B%2522token%2522%253A%2522%252BRID%253A%7EJWYqAOis7THQbBQAAAAAAg%253D%253D%2523RT%253A1%2523TRC%253A1%2523ISV%253A2%2523IEO%253A65551%2523QCF%253A4%2522%252C%2522range%2522%253A%257B%2522min%2522%253A%2522%2522%252C%2522max%2522%253A%252205C1D7F7591D44%2522%257D%257D" } ``` The response includes a **nextLink** value that you can use to retrieve the next page of results. -### $filter +### filter -Use **$filter** to create expressions that filter the list of devices. The following table shows the comparison operators you can use: +Use **filter** to create expressions that filter the list of devices. 
The following table shows the comparison operators you can use: -| Comparison Operator | Symbol | Example | -| -- | | | -| Equals | eq | `id eq 'device1' and scopes eq 'redmond'` | -| Not Equals | ne | `Enabled ne true` | -| Less than or equals | le | `contains(displayName, 'device1') le -1` | -| Less than | lt | `contains(displayName, 'device1') lt 0` | -| Greater than or equals | ge | `contains(displayName, 'device1') ge 0` | -| Greater than | gt | `contains(displayName, 'device1') gt 0` | +| Comparison Operator | Symbol | Example | +||--|-| +| Equals | eq | `id eq 'device1' and scopes eq 'redmond'` | +| Not Equals | ne | `Enabled ne true` | +| Less than or equals | le | `id le '26whl7mure6'` | +| Less than | lt | `id lt '26whl7mure6'` | +| Greater than or equals | ge | `id ge '26whl7mure6'` | +| Greater than | gt | `id gt '26whl7mure6'` | -The following table shows the logic operators you can use in *$filter* expressions: +The following table shows the logic operators you can use in *filter* expressions: | Logic Operator | Symbol | Example | | -- | | - | | AND | and | `id eq 'device1' and enabled eq true` | | OR | or | `id eq 'device1' or simulated eq false` | -Currently, *$filter* works with the following device fields: +Currently, *filter* works with the following device fields: | FieldName | Type | Description | | -- | - | - | Currently, *$filter* works with the following device fields: | `template` | string | Device template ID | | `scopes` | string | organization ID | -**$filter supported functions:** +**filter supported functions:** Currently, the only supported filter function for device lists is the `contains` function: ```http-$filter=contains(displayName, 'device1') ge 0 +filter=contains(displayName, 'device1') ``` The following example shows how to retrieve all the devices where the display name contains the string `thermostat`: ```http-GET https://{your app subdomain}/api/deviceTemplates?api-version=2022-07-31&$filter=contains(displayName, 'thermostat') +GET https://{your app subdomain}/api/deviceTemplates?api-version=2022-10-31-preview&filter=contains(displayName, 'thermostat') ``` The response to this request looks like the following example: The response to this request looks like the following example: } ``` -### $orderby +### orderby -Use **$orderby** to sort the results. Currently, **$orderby** only lets you sort on **displayName**. By default, **$orderby** sorts in ascending order. Use **desc** to sort in descending order, for example: +Use **orderby** to sort the results. Currently, **orderby** only lets you sort on **displayName**. By default, **orderby** sorts in ascending order. Use **desc** to sort in descending order, for example: ```http-$orderby=displayName -$orderby=displayName desc +orderby=displayName +orderby=displayName desc ``` The following example shows how to retrieve all the device templates where the result is sorted by `displayName` : ```http-GET https://{your app subdomain}/api/devices?api-version=2022-07-31&$orderby=displayName +GET https://{your app subdomain}/api/devices?api-version=2022-10-31-preview&orderby=displayName ``` The response to this request looks like the following example: The response to this request looks like the following example: You can also combine two or more filters. -The following example shows how to retrieve the top two devices where the display name contains the string `thermostat`. +The following example shows how to retrieve the top three devices where the display name contains the string `Thermostat`. 
```http-GET https://{your app subdomain}/api/deviceTemplates?api-version=2022-07-31&$filter=contains(displayName, 'thermostat')&$top=2 +GET https://{your app subdomain}/api/deviceTemplates?api-version=2022-10-31-preview&filter=contains(displayName, 'Thermostat')&maxpagesize=3 ``` The response to this request looks like the following example: ```json {- "value": [ - { - "id": "5jcwskdwbm", - "etag": "eyJoZWFkZXIiOiJcIjI0MDBlMDdjLTAwMDAtMDMwMC0wMDAwLTYxYjgxYmVlMDAwMFwiIn0", - "displayName": "thermostat1", - "simulated": false, - "provisioned": false, - "template": "dtmi:contoso:Thermostat;1", - "enabled": true - }, - { - "id": "ccc", - "etag": "eyJoZWFkZXIiOiJcIjI0MDAwYjdkLTAwMDAtMDMwMC0wMDAwLTYxYjgxZDJjMDAwMFwiIn0", - "displayName": "thermostat2", - "simulated": true, - "provisioned": true, - "template": "dtmi:contoso:Thermostat;1", - "enabled": true - } - ] + "value": [ + { + "id": "1fpwlahp0zp", + "displayName": "Thermostat - 1fpwlahp0zp", + "simulated": false, + "provisioned": false, + "etag": "eyJwZ0luc3RhbmNlIjoiYTRjZGQyMjQtZjIxMi00MTI4LTkyMTMtZjcwMTBlZDhkOWQ0In0=", + "template": "dtmi:contoso:mythermostattemplate;1", + "enabled": true + }, + { + "id": "1yg0zvpz9un", + "displayName": "Thermostat - 1yg0zvpz9un", + "simulated": false, + "provisioned": false, + "etag": "eyJwZ0luc3RhbmNlIjoiZGQ1YTY4MDUtYzQxNS00ZTMxLTgxM2ItNTRiYjdiYWQ1MWQ2In0=", + "template": "dtmi:contoso:mythermostattemplate;1", + "enabled": true + }, + { + "id": "20cp9l96znn", + "displayName": "Thermostat - 20cp9l96znn", + "simulated": false, + "provisioned": false, + "etag": "eyJwZ0luc3RhbmNlIjoiNGUzNWM4OTItNDBmZi00OTcyLWExYjUtM2I4ZjU5NGZkODBmIn0=", + "template": "dtmi:contoso:mythermostattemplate;1", + "enabled": true + } + ], + "nextLink": "https://{your app subdomain}.azureiotcentral.com/api/devices?api-version=2022-10-31-preview&filter=contains%28displayName%2C+%27Thermostat%27%29&maxpagesize=3&$skiptoken=aHR0cHM6Ly9pb3RjLXByb2QtbG4taW5ma3YteWRtLnZhdWx0LmF6dXJlLm5ldC9zZWNyZXRzL2FwaS1lbmMta2V5LzY0MzZkOTY2ZWRjMjRmMDQ5YWM1NmYzMzFhYzIyZjZi%3AgWMDkfdpzBF0eYiYCGRdGQ%3D%3D%3ATVTgi5YVv%2FBfCd7Oos6ayrCIy9CaSUVu2ULktGQoHZDlaN7uPUa1OIuW0MCqT3spVXlSRQ9wgNFXsvb6mXMT3WWapcDB4QPynkI%2FE1Z8k7s3OWiBW3EQpdtit3JTCbj8qRNFkA%3D%3D%3Aq63Js0HL7OCq%2BkTQ19veqA%3D%3D" } ``` ## Device groups +You can create device groups in an IoT Central application to monitor aggregate data, to use with jobs, and to manage access. Device groups are defined by a filter that selects the devices to add to the group. You can create device groups in the IoT Central portal or by using the API. + ### Add a device group Use the following request to create a new device group. |
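Because each page of results carries a **nextLink**, client code typically loops until the link is absent. The sketch below is an illustration rather than content from the source article; the subdomain, token, and filter values are placeholders, and it assumes the preview API version and query options described in this change:

```python
import requests

# Hypothetical placeholders: your app subdomain and an API token for the app.
BASE = "https://myapp.azureiotcentral.com"
HEADERS = {"Authorization": "SharedAccessSignature sr=..."}

url = f"{BASE}/api/devices"
params = {
    "api-version": "2022-10-31-preview",
    "filter": "contains(displayName, 'Thermostat') and enabled eq true",
    "maxpagesize": 3,
}

devices = []
while url:
    resp = requests.get(url, params=params, headers=HEADERS)
    resp.raise_for_status()
    body = resp.json()
    devices.extend(body.get("value", []))
    # nextLink already carries the full query string and continuation token,
    # so drop the explicit params for follow-up requests.
    url = body.get("nextLink")
    params = None

print(f"Matched {len(devices)} devices")
for device in devices:
    print(device["id"], device["displayName"])
```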
iot-central | Troubleshoot Connection | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-central/core/troubleshoot-connection.md | To display telemetry from components hosted in IoT Edge modules correctly, use [ If you need more help, you can contact the Azure experts on the [Microsoft Q&A and Stack Overflow forums](https://azure.microsoft.com/support/community/). Alternatively, you can file an [Azure support ticket](https://portal.azure.com/#create/Microsoft.Support). -For more information, see [Azure IoT support and help options](../../iot-fundamentals/iot-support-help.md). +For more information, see [Azure IoT support and help options](../../iot/iot-support-help.md). |
iot-central | Troubleshoot Data Export | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-central/core/troubleshoot-data-export.md | To learn more, see [Export data](howto-export-data.md?tabs=managed-identity). If you need more help, you can contact the Azure experts on the [Microsoft Q&A and Stack Overflow forums](https://azure.microsoft.com/support/community/). Alternatively, you can file an [Azure support ticket](https://portal.azure.com/#create/Microsoft.Support). -For more information, see [Azure IoT support and help options](../../iot-fundamentals/iot-support-help.md). +For more information, see [Azure IoT support and help options](../../iot/iot-support-help.md). |
iot-develop | Concepts Developer Guide Device | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-develop/concepts-developer-guide-device.md | To build an IoT Plug and Play device, module, or IoT Edge module, follow these s 1. Update your device or module to announce the `model-id` as part of the device connection. 1. Implement telemetry, properties, and commands that follow the [IoT Plug and Play conventions](concepts-convention.md) -Once your device or module implementation is ready, use the [Azure IoT explorer](../iot-fundamentals/howto-use-iot-explorer.md) to validate that the device follows the IoT Plug and Play conventions. +Once your device or module implementation is ready, use the [Azure IoT explorer](../iot/howto-use-iot-explorer.md) to validate that the device follows the IoT Plug and Play conventions. :::zone pivot="programming-language-ansi-c" |
iot-develop | Concepts Digital Twin | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-develop/concepts-digital-twin.md | Now that you've learned about digital twins, here are some more resources: - [How to use IoT Plug and Play digital twin APIs](howto-manage-digital-twin.md) - [Interact with a device from your solution](tutorial-service.md) - [IoT Digital Twin REST API](/rest/api/iothub/service/digitaltwin)-- [Azure IoT explorer](../iot-fundamentals/howto-use-iot-explorer.md)+- [Azure IoT explorer](../iot/howto-use-iot-explorer.md) |
iot-develop | Concepts Model Discovery | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-develop/concepts-model-discovery.md | Now that you've learned how to integrate IoT Plug and Play models in an IoT solu - [Interact with a device from your solution](tutorial-service.md) - [IoT Digital Twin REST API](/rest/api/iothub/service/digitaltwin)-- [Azure IoT explorer](../iot-fundamentals/howto-use-iot-explorer.md)+- [Azure IoT explorer](../iot/howto-use-iot-explorer.md) |
iot-develop | Howto Manage Digital Twin | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-develop/howto-manage-digital-twin.md | Now that you've learned about digital twins, here are some more resources: - [Interact with a device from your solution](tutorial-service.md) - [IoT Digital Twin REST API](/rest/api/iothub/service/digitaltwin)-- [Azure IoT explorer](../iot-fundamentals/howto-use-iot-explorer.md)+- [Azure IoT explorer](../iot/howto-use-iot-explorer.md) |
iot-develop | Overview Iot Plug And Play | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-develop/overview-iot-plug-and-play.md | As a solution builder, you can use [IoT Central](../iot-central/core/overview-io The web UI in IoT Central lets you monitor device conditions, create rules, and manage millions of devices and their data throughout their life cycle. IoT Plug and Play devices connect directly to an IoT Central application. Here you can use customizable dashboards to monitor and control your devices. You can also use device templates in the IoT Central web UI to create and edit DTDL models. -IoT Hub - a managed cloud service - acts as a message hub for secure, bi-directional communication between your IoT application and your devices. When you connect an IoT Plug and Play device to an IoT hub, you can use the [Azure IoT explorer](../iot-fundamentals/howto-use-iot-explorer.md) tool to view the telemetry, properties, and commands defined in the DTDL model. +IoT Hub - a managed cloud service - acts as a message hub for secure, bi-directional communication between your IoT application and your devices. When you connect an IoT Plug and Play device to an IoT hub, you can use the [Azure IoT explorer](../iot/howto-use-iot-explorer.md) tool to view the telemetry, properties, and commands defined in the DTDL model. If you have existing sensors attached to a Windows or Linux gateway, you can use [IoT Plug and Play bridge](./concepts-iot-pnp-bridge.md), to connect these sensors and create IoT Plug and Play devices without the need to write device software/firmware (for [supported protocols](./concepts-iot-pnp-bridge.md#supported-protocols-and-sensors)). |
iot-develop | Set Up Environment | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-develop/set-up-environment.md | The first time you run the tool, you're prompted for the IoT hub connection stri Configure the tool to use the model files you downloaded previously. From the home page in the tool, select **IoT Plug and Play Settings**, then **+ Add > Local folder**. Select the *models* folder you created previously. Then select **Save** to save the settings. -To learn more, see [Install and use Azure IoT explorer](../iot-fundamentals/howto-use-iot-explorer.md). +To learn more, see [Install and use Azure IoT explorer](../iot/howto-use-iot-explorer.md). ## Clean up resources |
iot-dps | Concepts Deploy At Scale | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-dps/concepts-deploy-at-scale.md | if (provisioningDetails != null) ## IoT Hub connectivity considerations - Any single IoT hub is limited to 1 million devices plus modules. If you plan to have more than a million devices, cap the number of devices to 1 million per hub and add hubs as needed when increasing the scale of your deployment. For more information, see [IoT Hub quotas](../iot-hub/iot-hub-devguide-quotas-throttling.md).-- If you have plans for more than a million devices and you need to support them in a specific region (such as in an EU region for data residency requirements), you can [contact us](../iot-fundamentals/iot-support-help.md) to ensure that the region you're deploying to has the capacity to support your current and future scale.+- If you have plans for more than a million devices and you need to support them in a specific region (such as in an EU region for data residency requirements), you can [contact us](../iot/iot-support-help.md) to ensure that the region you're deploying to has the capacity to support your current and future scale. Recommended device logic when connecting to IoT Hub via DPS: |
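The `if (provisioningDetails != null)` fragment quoted above points at the recommended pattern of caching provisioning details instead of calling DPS on every startup. The following Python sketch illustrates that pattern only; the helper functions are hypothetical placeholders, not Azure SDK calls:

```python
import json
import pathlib

CACHE = pathlib.Path("provisioning_cache.json")

def provision_via_dps() -> dict:
    # Hypothetical stand-in for a real DPS registration call; a real device
    # would authenticate with its attestation mechanism and return the assignment.
    return {"assignedHub": "example-hub.azure-devices.net", "deviceId": "device-01"}

def connect_to_hub(assigned_hub: str, device_id: str) -> None:
    # Hypothetical stand-in for opening a connection with your device SDK.
    print(f"Connecting {device_id} directly to {assigned_hub}")

def connect() -> None:
    # Reuse cached provisioning details when they exist.
    if CACHE.exists():
        details = json.loads(CACHE.read_text())
    else:
        details = provision_via_dps()
        CACHE.write_text(json.dumps(details))
    try:
        connect_to_hub(details["assignedHub"], details["deviceId"])
    except Exception:
        # Fall back to DPS only when the cached assignment no longer works.
        details = provision_via_dps()
        CACHE.write_text(json.dumps(details))
        connect_to_hub(details["assignedHub"], details["deviceId"])

connect()
```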
iot-dps | Concepts Device Oem Security Practices | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-dps/concepts-device-oem-security-practices.md | For more information, see [provisioning](about-iot-dps.md#provisioning-process) ## Resources In addition to the recommended security practices in this article, Azure IoT provides resources to help with selecting secure hardware and creating secure IoT deployments: -- Azure IoT [security best practices](../iot-fundamentals/iot-security-best-practices.md) to guide the deployment process. +- Azure IoT [security best practices](../iot/iot-security-best-practices.md) to guide the deployment process. - The [Microsoft Defender for Cloud](https://azure.microsoft.com/services/security-center/) offers a service to help create secure IoT deployments. - For help with evaluating your hardware environment, see the whitepaper [Evaluating your IoT Security](https://download.microsoft.com/download/D/3/9/D3948E3C-D5DC-474E-B22F-81BA8ED7A446/Evaluating_Your_IOT_Security_whitepaper_EN_US.pdf). - For help with selecting secure hardware, see [The Right Secure Hardware for your IoT Deployment](https://download.microsoft.com/download/C/0/5/C05276D6-E602-4BB1-98A4-C29C88E57566/The_right_secure_hardware_for_your_IoT_deployment_EN_US.pdf). |
iot-edge | Module Deployment Monitoring | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-edge/module-deployment-monitoring.md | Azure IoT Edge provides two ways to configure the modules to run on IoT Edge dev You can't combine per-device and automatic deployments. Once you start targeting IoT Edge devices with automatic deployments (with or without layered deployments), per-device deployments are no longer supported. -This article focuses on configuring and monitoring fleets of devices, collectively referred to as *IoT Edge automatic deployments*. The basic deployment steps are as follows: +This article focuses on configuring and monitoring fleets of devices, collectively referred to as *IoT Edge automatic deployments*. -1. An operator defines a deployment that describes a set of modules and the target devices. Each deployment has a deployment manifest that reflects this information. -2. The IoT Hub service communicates with all targeted devices to configure them with the declared modules. -3. The IoT Hub service retrieves status from the IoT Edge devices and makes them available to the operator.  For example, an operator can see when an Edge device isn't configured successfully or if a module fails during runtime. -4. At any time, new IoT Edge devices that meet the targeting conditions are configured for the deployment. +The basic deployment steps are as follows: ++1. An operator defines a deployment manifest that describes a set of modules and the target devices. +2. As a result, the IoT Hub service communicates with all targeted devices to configure them with the declared modules. +3. The IoT Hub service retrieves status from the IoT Edge devices and makes them available to the operator. For example, an operator can see when an Edge device isn't configured successfully or if a module fails during runtime. +4. At any time, when newly targeted IoT Edge devices come online and connect with IoT Hub, they're configured for the deployment. This article describes each component involved in configuring and monitoring a deployment. For a walkthrough of creating and updating a deployment, see [Deploy and monitor IoT Edge modules at scale](how-to-deploy-at-scale.md). ## Deployment -An IoT Edge automatic deployment assigns IoT Edge module images to run as instances on a targeted set of IoT Edge devices. It works by configuring an IoT Edge deployment manifest to include a list of modules with the corresponding initialization parameters. A deployment can be assigned to a single device (based on Device ID) or to a group of devices (based on tags). Once an IoT Edge device receives a deployment manifest, it downloads and installs the container images from the respective container repositories, and configures them accordingly. Once a deployment is created, an operator can monitor the deployment status to see whether targeted devices are correctly configured. +An IoT Edge automatic deployment assigns IoT Edge module images to run as instances on a targeted set of IoT Edge devices. The automated deployment configures an IoT Edge deployment manifest to include a list of modules with the corresponding initialization parameters. A deployment can be assigned to a single device (based on Device ID) or to a group of devices (based on tags). Once an IoT Edge device receives a deployment manifest, it downloads and installs the container images from the respective container repositories, and configures them accordingly. 
Once a deployment is created, an operator can monitor the deployment status to see whether targeted devices are correctly configured. Only IoT Edge devices can be configured with a deployment. The following prerequisites must be on the device before it can receive the deployment: Only IoT Edge devices can be configured with a deployment. The following prerequ ### Deployment manifest -A deployment manifest is a JSON document that describes the modules to be configured on the targeted IoT Edge devices. It contains the configuration metadata for all the modules, including the required system modules (specifically the IoT Edge agent and IoT Edge hub).  +A deployment manifest is a JSON document that describes the modules to be configured on the targeted IoT Edge devices. It contains the configuration metadata for all the modules, including the required system modules (specifically the IoT Edge agent and IoT Edge hub). The configuration metadata for each module includes: * Version * Type-* Status (for example, running or stopped) +* Status (for example, *Running* or *Stopped*) * Restart policy * Image and container registry * Routes for data input and output If the module image is stored in a private container registry, the IoT Edge agen ### Target condition -The target condition is continuously evaluated throughout the lifetime of the deployment. Any new devices that meet the requirements are included, and any existing devices that no longer do are removed. The deployment is reactivated if the service detects any target condition change. +The target device condition is continuously evaluated throughout the lifetime of the deployment. Any new devices that meet the requirements are included, and any existing devices that no longer meet requirements are removed. The deployment is reactivated if the service detects any target condition change. -For example, you have a deployment with a target condition tags.environment = 'prod'. When you kick off the deployment, there are 10 production devices. The modules are successfully installed in these 10 devices. The IoT Edge agent status shows 10 total devices, 10 successful responses, 0 failure responses, and 0 pending responses. Now you add five more devices with tags.environment = 'prod'. The service detects the change and the IoT Edge agent status becomes 15 total devices, 10 successful responses, 0 failure responses, and 5 pending responses while it deploys to the five new devices. +For example, you have a deployment with a target condition `tags.environment = 'prod'`. When you initiate the deployment, there are 10 production devices. The modules are successfully installed in these 10 devices. The IoT Edge agent status shows 10 total devices, 10 successful responses, 0 failure responses, and 0 pending responses. Now you add five more devices with `tags.environment = 'prod'`. The service detects the change and the IoT Edge agent status now shows 15 total devices, 10 successful responses, 0 failure responses, and 5 pending responses while it deploys to the five new devices. -If a deployment has no target condition, then it is applied to no devices. +If a deployment has no target condition, then it's applied to no devices. -Use any Boolean condition on device twin tags, device twin reported properties, or deviceId to select the target devices. If you want to use condition with tags, you need to add "tags":{} section in the device twin under the same level as properties. 
[Learn more about tags in device twin](../iot-hub/iot-hub-devguide-device-twins.md) +Use any Boolean condition on device twin tags, device twin reported properties, or deviceId to select the target devices. If you want to use a condition with tags, you need to add a `"tags":{}` section in the device twin under the same level as properties. [Learn more about tags in a device twin](../iot-hub/iot-hub-devguide-device-twins.md). Examples of target conditions: Examples of target conditions: * tags.environment = 'prod' OR tags.location = 'westus' * tags.operator = 'John' AND tags.environment = 'prod' AND NOT deviceId = 'linuxprod1' * properties.reported.devicemodel = '4000x'-* \[none] +* [none] Consider these constraints when you construct a target condition: -* In device twin, you can only build a target condition using tags, reported properties, or deviceId. +* In the device twin, you can only build a target condition using tags, reported properties, or deviceId. * Double quotes aren't allowed in any portion of the target condition. Use single quotes. * Single quotes represent the values of the target condition. Therefore, you must escape the single quote with another single quote if it's part of the device name. For example, to target a device called `operator'sDevice`, write `deviceId='operator''sDevice'`. * Numbers, letters, and the following characters are allowed in target condition values: `"()<>@,;:\\"/?={} \t\n\r`. Consider these constraints when you construct a target condition: ### Priority -A priority defines whether a deployment should be applied to a targeted device relative to other deployments. A deployment priority is a positive integer, with larger numbers denoting higher priority. If an IoT Edge device is targeted by more than one deployment, the deployment with the highest priority applies.  Deployments with lower priorities are not applied, nor are they merged.  If a device is targeted with two or more deployments with equal priority, the most recently created deployment (determined by the creation timestamp) applies. +A priority defines whether a deployment should be applied to a targeted device relative to other deployments. A deployment priority is a positive integer, with larger numbers denoting higher priority. If an IoT Edge device is targeted by more than one deployment, the deployment with the highest priority applies. Deployments with lower priorities are not applied, nor are they merged. If a device is targeted with two or more deployments with equal priority, the most recently created deployment (determined by the creation timestamp) applies. ### Labels -Labels are string key/value pairs that you can use to filter and group deployments. A deployment may have multiple labels. Labels are optional and don't impact the actual configuration of IoT Edge devices. +Labels are string key/value pairs that you can use to filter and group deployments. A deployment may have multiple labels. Labels are optional and don't impact the configuration of IoT Edge devices. ### Metrics By default, all deployments report on four metrics: * **Targeted** shows the IoT Edge devices that match the Deployment targeting condition.-* **Applied** shows the targeted IoT Edge devices that are not targeted by another deployment of higher priority. -* **Reporting Success** shows the IoT Edge devices that have reported that the modules have been deployed successfully. -* **Reporting Failure** shows the IoT Edge devices that have reported that one or more modules haven't been deployed successfully. 
To further investigate the error, connect remotely to those devices and view the log files. +* **Applied** shows the targeted IoT Edge devices that aren't targeted by another deployment of higher priority. +* **Reporting Success** shows the IoT Edge devices that report their modules as deployed successfully. +* **Reporting Failure** shows the IoT Edge devices that report one or more modules as deployed unsuccessfully. To further investigate the error, connect remotely to those devices and view the log files. Additionally, you can define your own custom metrics to help monitor and manage the deployment. -Metrics provide summary counts of the various states that devices may report back as a result of applying a deployment configuration. Metrics can query [edgeHub module twin reported properties](module-edgeagent-edgehub.md#edgehub-reported-properties), like *lastDesiredStatus* or *lastConnectTime*. For example: +Metrics provide summary counts of the various states that devices may report back as a result of applying a deployment configuration. Metrics can query [edgeHub module twin reported properties](module-edgeagent-edgehub.md#edgehub-reported-properties), like *lastDesiredStatus* or *lastConnectTime*. ++For example: ```sql SELECT deviceId FROM devices Adding your own metrics is optional, and doesn't impact the actual configuration Layered deployments are automatic deployments that can be combined together to reduce the number of unique deployments that need to be created. Layered deployments are useful in scenarios where the same modules are reused in different combinations in many automatic deployments. -Layered deployments have the same basic components as any automatic deployment. They target devices based on tags in the device twins, and provide the same functionality around labels, metrics, and status reporting. Layered deployments also have priorities assigned to them, but instead of using the priority to determine which deployment is applied to a device, the priority determines how multiple deployments are ranked on a device. For example, if two layered deployments have a module or a route with the same name, the layered deployment with the higher priority will be applied while the lower priority is overwritten. +Layered deployments have the same basic components as any automatic deployment. They target devices based on tags in the device twins and provide the same functionality around labels, metrics, and status reporting. Layered deployments also have priorities assigned to them. Instead of using the priority to determine which deployment is applied to a device, the priority determines how multiple deployments are ranked on a device. For example, if two layered deployments have a module or a route with the same name, the layered deployment with the higher priority will be applied while the lower priority is overwritten. -The system runtime modules, edgeAgent and edgeHub, are not configured as part of a layered deployment. Any IoT Edge device targeted by a layered deployment needs a standard automatic deployment applied to it first. The automatic deployment provides the base upon which layered deployments can be added. +The system runtime modules, known as edgeAgent and edgeHub, are not configured as part of a layered deployment. Any IoT Edge device targeted by a layered deployment, first needs a standard automatic deployment applied to it. The automatic deployment provides the base upon which layered deployments can be added. 
An IoT Edge device can apply one and only one standard automatic deployment, but it can apply multiple layered automatic deployments. Any layered deployments targeting a device must have a higher priority than the automatic deployment for that device. -For example, consider the following scenario of a company that manages buildings. They developed IoT Edge modules for collecting data from security cameras, motion sensors, and elevators. However, not all their buildings can use all three modules. With standard automatic deployments, the company needs to create individual deployments for all the module combinations that their buildings need. +For example, consider the following scenario of a company that manages buildings. The company developed IoT Edge modules for collecting data from security cameras, motion sensors, and elevators. However, not all their buildings can use all three modules. With standard automatic deployments, the company needs to create individual deployments for all the module combinations that their buildings need. - -However, once the company switches to layered automatic deployments they find that they can create the same module combinations for their buildings with fewer deployments to manage. Each module has its own layered deployment, and the device tags identify which modules get added to each building. +However, once the company switches to layered automatic deployments, they can create the same module combinations for their buildings with fewer deployments to manage. Each module has its own layered deployment, and the device tags identify which modules get added to each building. - ### Module twin configuration When you work with layered deployments, you may, intentionally or otherwise, have two deployments with the same module targeting a device. In those cases, you can decide whether the higher priority deployment should overwrite the module twin or append to it. For example, you may have a deployment that applies the same module to 100 different devices. However, 10 of those devices are in secure facilities and need additional configuration in order to communicate through proxy servers. You can use a layered deployment to add module twin properties that enable those 10 devices to communicate securely without overwriting the existing module twin information from the base deployment. -You can append module twin desired properties in the deployment manifest. Where in a standard deployment you would add properties in the **properties.desired** section of the module twin, in a layered deployment you can declare a new subset of desired properties. +You can append module twin desired properties in the deployment manifest. In a standard deployment, you would add properties in the **properties.desired** section of the module twin. But in a layered deployment, you can declare a new subset of desired properties. For example, in a standard deployment you might add the simulated temperature sensor module with the following desired properties that tell it to send data in 5-second intervals: For example, in a standard deployment you might add the simulated temperature se } ``` -In a layered deployment that targets some or all of the same devices, you could add a property that tells the simulated sensor to send 1000 messages and then stop. 
You don't want to overwrite the existing properties, so you create a new section within the desired properties called `layeredProperties`, which contains the new property: +In a layered deployment that targets some or all of these same devices, you could add a property that tells the simulated sensor to send 1000 messages and then stop. You don't want to overwrite the existing properties, so you create a new section within the desired properties called `layeredProperties`, which contains the new property: ```json "SimulatedTemperatureSensor": { A device that has both deployments applied will reflect the following properties } ``` -If you do set the `properties.desired` field of the module twin in a layered deployment, it will overwrite the desired properties for that module in any lower priority deployments. +If you set the `properties.desired` field of the module twin in a layered deployment, `properties.desired` will overwrite the desired properties for that module in any lower priority deployments. ## Phased rollout -A phased rollout is an overall process whereby an operator deploys changes to a broadening set of IoT Edge devices. The goal is to make changes gradually to reduce the risk of making wide scale breaking changes. Automatic deployments help manage phased rollouts across a fleet of IoT Edge devices. +A phased rollout is an overall process whereby an operator deploys changes to a broadening set of IoT Edge devices. The goal is to make changes gradually to reduce the risk of making wide-scale breaking changes. Automatic deployments help manage phased rollouts across a fleet of IoT Edge devices. A phased rollout is executed in the following phases and steps: -1. Establish a test environment of IoT Edge devices by provisioning them and setting a device twin tag like `tag.environment='test'`. The test environment should mirror the production environment that the deployment will eventually target. +1. Establish a test environment of IoT Edge devices by provisioning them and setting a device twin tag like `tag.environment='test'`. The test environment should mirror the production environment that the deployment will eventually target. 2. Create a deployment including the desired modules and configurations. The targeting condition should target the test IoT Edge device environment. 3. Validate the new module configuration in the test environment.-4. Update the deployment to include a subset of production IoT Edge devices by adding a new tag to the targeting condition. Also, ensure that the priority for the deployment is higher than other deployments currently targeted to those devices -5. Verify that the deployment succeeded on the targeted IoT Devices by viewing the deployment status. +4. Update the deployment to include a subset of production IoT Edge devices by adding a new tag to the targeting condition. Also, ensure that the priority for the deployment is higher than other deployments currently targeted to those devices. +5. Verify that the deployment succeeded on the targeted IoT Edge devices by viewing the deployment status. 6. Update the deployment to target all remaining production IoT Edge devices. ## Rollback -Deployments can be rolled back if you receive errors or misconfigurations. Because a deployment defines the absolute module configuration for an IoT Edge device, an additional deployment must also be targeted to the same device at a lower priority even if the goal is to remove all modules.  +Deployments can be rolled back if you receive errors or misconfigurations. 
Because a deployment defines the absolute module configuration for an IoT Edge device, an additional deployment must also be targeted to the same device at a lower priority even if the goal is to remove all modules. Deleting a deployment doesn't remove the modules from targeted devices. There must be another deployment that defines a new configuration for the devices, even if it's an empty deployment. |
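To make the layered-deployment ideas above concrete, the following Python sketch assembles a deployment document that appends `properties.desired.layeredProperties` for the SimulatedTemperatureSensor module. It is an illustration, not text from the source article: the deployment ID and label are hypothetical, and the field names (modules content, target condition, priority) are an assumption about how the concepts discussed above map onto a deployment document.

```python
import json

# Minimal sketch of a layered deployment document, under the assumptions noted above.
layered_deployment = {
    "id": "simulated-sensor-send-limit",        # hypothetical deployment name
    "targetCondition": "tags.environment='prod'",
    "priority": 10,                              # must outrank the base deployment
    "labels": {"owner": "ops-team"},             # optional, used only for filtering
    "content": {
        "modulesContent": {
            "SimulatedTemperatureSensor": {
                # Append a new subset of desired properties instead of
                # overwriting properties.desired from the base deployment.
                "properties.desired.layeredProperties": {
                    "SendLimit": 1000
                }
            }
        }
    },
}

with open("layered-deployment.json", "w") as f:
    json.dump(layered_deployment, f, indent=2)

print(json.dumps(layered_deployment, indent=2))
```

The resulting JSON file can then be supplied to whatever tooling you normally use to create the layered deployment, alongside a base automatic deployment that provides the system runtime modules.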
iot-edge | Support | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-edge/support.md | IoT Edge uses the Microsoft.Azure.Devices.Client SDK. For more information, see | IoT Edge version | Microsoft.Azure.Devices.Client SDK version | ||--| | 1.4 | 1.36.6 |-| 1.3 | 1.36.6 | -| 1.2.0 | 1.33.4-NestedEdge | -| 1.1 (LTS) | 1.28.0 | -| 1.0.10 | 1.28.0 | -| 1.0.9 | 1.21.1 | -| 1.0.8 | 1.20.3 | -| 1.0.7 | 1.20.1 | -| 1.0.6 | 1.17.1 | -| 1.0.5 | 1.17.1 | ## Virtual Machines |
iot-edge | Tutorial Deploy Function | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-edge/tutorial-deploy-function.md | description: In this tutorial, you develop an Azure Function as an IoT Edge modu Previously updated : 05/11/2022 Last updated : 3/22/2023 You can use Azure Functions to deploy code that implements your business logic d > * Deploy the module from the container registry to your IoT Edge device. > * View filtered data. -<center> - -</center> The Azure Function that you create in this tutorial filters the temperature data that's generated by your device. The Function only sends messages upstream to Azure IoT Hub when the temperature is above a specified threshold. The Azure Function that you create in this tutorial filters the temperature data ## Prerequisites -Before beginning this tutorial, you should have gone through the previous tutorial to set up your development environment for Linux container development: [Develop IoT Edge modules using Linux containers](tutorial-develop-for-linux.md). By completing that tutorial, you should have the following prerequisites in place: +Before beginning this tutorial, do the tutorial to set up your development environment for Linux container development: [Develop IoT Edge modules using Linux containers](tutorial-develop-for-linux.md). After completing that tutorial, you should have the following prerequisites in place: * A free or standard-tier [IoT Hub](../iot-hub/iot-hub-create-through-portal.md) in Azure.-* An AMD64 device running Azure IoT Edge with Linux containers. You can use the quickstarts to set up a [Linux device](quickstart-linux.md) or [Windows device](quickstart.md). +* An AMD64 device running Azure IoT Edge with Linux containers. You can use the quickstart to set up a [Linux device](quickstart-linux.md) or [Windows device](quickstart.md). * A container registry, like [Azure Container Registry](../container-registry/index.yml). * [Visual Studio Code](https://code.visualstudio.com/) configured with the [Azure IoT Edge](https://marketplace.visualstudio.com/items?itemName=vsciot-vscode.azure-iot-edge) and [Azure IoT Hub](https://marketplace.visualstudio.com/items?itemName=vsciot-vscode.azure-iot-toolkit) extensions. * Download and install a [Docker compatible container management system](support.md#container-engines) on your development machine. Configure it to run Linux containers. -To develop an IoT Edge module in with Azure Functions, install the following additional prerequisites on your development machine: +To develop an IoT Edge module with Azure Functions, install additional prerequisites on your development machine: * [C# for Visual Studio Code (powered by OmniSharp) extension](https://marketplace.visualstudio.com/items?itemName=ms-dotnettools.csharp). * [The .NET Core SDK](https://dotnet.microsoft.com/download). The Azure IoT Edge for Visual Studio Code that you installed in the prerequisite ### Create a new project -Create a C# Function solution template that you can customize with your own code. +Follow these steps to create a C# Function solution template that's customizable. 1. Open Visual Studio Code on your development machine. 2. Open the Visual Studio Code command palette by selecting **View** > **Command Palette**. -3. In the command palette, enter and run the command **Azure IoT Edge: New IoT Edge solution**. Follow the prompts in the command palette to create your solution. +3. In the command palette, add and run the command **Azure IoT Edge: New IoT Edge solution**. 
Follow these prompts in the command palette to create your solution: - | Field | Value | - | -- | -- | - | Select folder | Choose the location on your development machine for Visual Studio Code to create the solution files. | - | Provide a solution name | Enter a descriptive name for your solution, like **FunctionSolution**, or accept the default. | - | Select module template | Choose **Azure Functions - C#**. | - | Provide a module name | Name your module **CSharpFunction**. | - | Provide Docker image repository for the module | An image repository includes the name of your container registry and the name of your container image. Your container image is prepopulated from the last step. Replace **localhost:5000** with the **Login server** value from your Azure container registry. You can retrieve the Login server from the Overview page of your container registry in the Azure portal. The final string looks like \<registry name\>.azurecr.io/CSharpFunction. | + * Select a folder: choose the location on your development machine for Visual Studio Code to create the solution files. + * Provide a solution name: add a descriptive name for your solution, like **FunctionSolution**, or accept the default.| + * Select a module template: choose **Azure Functions - C#**. + * Provide a module name | Name your module **CSharpFunction**. + * Provide a Docker image repository for the module. An image repository includes the name of your container registry and the name of your container image. Your container image is pre-populated from the last step. Replace **localhost:5000** with the **Login server** value from your Azure container registry. You can retrieve the **Login server** from the **Overview** page of your container registry in the Azure portal. The final string looks like \<registry name\>.azurecr.io/csharpfunction. -  + :::image type="content" source="./media/tutorial-deploy-function/repository.png" alt-text="Screenshot showing where to add your Docker image repository name in Visual Studio Code."::: ### Add your registry credentials -The environment file stores the credentials for your container registry and shares them with the IoT Edge runtime. The runtime needs these credentials to pull your private images onto the IoT Edge device. +The environment file of your solution stores the credentials for your container registry and shares them with the IoT Edge runtime. The runtime needs these credentials to pull your private images onto your IoT Edge device. -The IoT Edge extension tries to pull your container registry credentials from Azure and populate them in the environment file. Check to see if your credentials are already included. If not, add them now: +The IoT Edge extension in Visual Studio Code tries to pull your container registry credentials from Azure and populate them in the environment file. Check to see if your credentials are already in the file. If not, add them now: -1. In the Visual Studio Code explorer, open the .env file. -2. Update the fields with the **username** and **password** values that you copied from your Azure container registry. +1. In the Visual Studio Code explorer, open the `.env` file. +2. Update the fields with the **username** and **password** values that you copied from your Azure container registry. You can find them again by going to your container registry in Azure and looking on the **Settings** > **Access keys** page. 3. Save this file. 
>[!NOTE] The IoT Edge extension tries to pull your container registry credentials from Az Running Azure Functions modules on IoT Edge is supported only on Linux AMD64 based containers. The default target architecture for Visual Studio Code is Linux AMD64, but we will set it explicitly to Linux AMD64 here. -1. Open the command palette and search for **Azure IoT Edge: Set Default Target Platform for Edge Solution**, or select the shortcut icon in the side bar at the bottom of the window. +1. Open the command palette and search for **Azure IoT Edge: Set Default Target Platform for Edge Solution**. 2. In the command palette, select the AMD64 target architecture from the list of options. ### Update the module with custom code -Let's add some additional code so that the module processes the messages at the edge before forwarding them to IoT Hub. +Let's add some additional code so your **CSharpFunction** module processes the messages at the edge before forwarding them to IoT Hub. -1. In Visual Studio Code, open **modules** > **CSharpFunction** > **CSharpFunction.cs**. +1. In the Visual Studio Code explorer, open **modules** > **CSharpFunction** > **CSharpFunction.cs**. 1. Replace the contents of the **CSharpFunction.cs** file with the following code. This code receives telemetry about ambient and machine temperature, and only forwards the message on to IoT Hub if the machine temperature is above a defined threshold. In the previous section, you created an IoT Edge solution and modified the **CSh Visual Studio Code outputs a success message when your container image is pushed to your container registry. If you want to confirm the successful operation for yourself, you can view the image in the registry. 1. In the Azure portal, browse to your Azure container registry.-2. Select **Repositories**. +2. Select **Services** > **Repositories**. 3. You should see the **csharpfunction** repository in the list. Select this repository to see more details. 4. In the **Tags** section, you should see the **0.0.1-amd64** tag. This tag indicates the version and platform of the image that you built. These values are set in the module.json file in the CSharpFunction folder. ## Deploy and run the solution -You can use the Azure portal to deploy your Function module to an IoT Edge device like you did in the quickstarts. You can also deploy and monitor modules from within Visual Studio Code. The following sections use the Azure IoT Edge and IoT Hub for Visual Studio Code that was listed in the prerequisites. Install the extension now, if you didn't already. +You can use the Azure portal to deploy your Function module to an IoT Edge device like you did in the quickstart. You can also deploy and monitor modules from within Visual Studio Code. The following sections use the Azure IoT Edge and IoT Hub for Visual Studio Code that was listed in the prerequisites. Install the extensions now, if you haven't already. 1. In the Visual Studio Code explorer, under the **Azure IoT Hub** section, expand **Devices** to see your list of IoT devices. You can use the Azure portal to deploy your Function module to an IoT Edge devic It may take a few moments for the new modules to show up. Your IoT Edge device has to retrieve its new deployment information from IoT Hub, start the new containers, and then report the status back to IoT Hub. 
-  + :::image type="content" source="./media/tutorial-deploy-function/view-modules.png" alt-text="Screenshot showing how to view deployed modules in Visual Studio Code."::: ## View the generated data -You can see all of the messages that arrive at your IoT hub by running **Azure IoT Hub: Start Monitoring Built-in Event Endpoint** in the command palette. +You can see all of the messages that arrive at your IoT hub from all your devices by running **Azure IoT Hub: Start Monitoring Built-in Event Endpoint** in the command palette. To stop monitoring messages, run the command **Azure IoT Hub: Stop Monitoring Built-in Event Endpoint** in the command palette. -You can also filter the view to see all of the messages that arrive at your IoT hub from a specific device. Right-click the device in the **Azure IoT Hub Devices** section and select **Start Monitoring Built-in Event Endpoint**. --To stop monitoring messages, run the command **Azure IoT Hub: Stop Monitoring Built-in Event Endpoint** in the command palette. +You can also filter the view to see all of the messages that arrive at your IoT hub from a specific device. Right-click the device in the **Azure IoT Hub** > **Devices** section of the Visual Studio Code explorer and select **Start Monitoring Built-in Event Endpoint**. ## Clean up resources |
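The tutorial's filtering behavior, forwarding a message only when the machine temperature exceeds a threshold, is implemented in the C# function and isn't reproduced in this change. As a rough, language-neutral illustration of that rule only (not the tutorial's module code), here is a Python sketch; the threshold value and the payload shape of the simulated sensor are assumptions:

```python
import json

TEMPERATURE_THRESHOLD = 25  # hypothetical threshold, same units as the telemetry

def should_forward(message_bytes: bytes) -> bool:
    """Return True when the machine temperature exceeds the threshold.

    Assumes a payload shaped like {"machine": {"temperature": ...}, "ambient": {...}};
    adjust the path if your telemetry differs.
    """
    payload = json.loads(message_bytes)
    machine_temperature = payload.get("machine", {}).get("temperature")
    return machine_temperature is not None and machine_temperature > TEMPERATURE_THRESHOLD

# Example: only the second message would be forwarded upstream to IoT Hub.
samples = [
    b'{"machine": {"temperature": 21.5}, "ambient": {"temperature": 20.9}}',
    b'{"machine": {"temperature": 31.2}, "ambient": {"temperature": 21.3}}',
]
for sample in samples:
    print(should_forward(sample))
```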
iot-hub | Iot Concepts And Iot Hub | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-hub/iot-concepts-and-iot-hub.md | To try out an end-to-end IoT solution, check out the IoT Hub quickstarts: To learn more about the ways you can build and deploy IoT solutions with Azure IoT, visit: - [What is Azure IoT device and application development](../iot-develop/about-iot-develop.md)-- [Fundamentals: Azure IoT technologies and solutions](../iot-fundamentals/iot-services-and-technologies.md)+- [Fundamentals: Azure IoT technologies and solutions](../iot/iot-services-and-technologies.md) |
iot | Howto Use Iot Explorer | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot/howto-use-iot-explorer.md | + + Title: Install and use Azure IoT explorer | Microsoft Docs +description: Install the Azure IoT explorer tool and use it to interact with IoT Plug and Play devices connected to IoT hub. Although this article focuses on working with IoT Plug and Play devices, you can use the tool with any device connected to your hub. ++ Last updated : 08/23/2022++++++#Customer intent: As a solution builder, I want to use a GUI tool to interact with IoT Plug and Play devices connected to an IoT hub to test and verify their behavior. +++# Install and use Azure IoT explorer ++The Azure IoT explorer is a graphical tool for interacting with devices connected to your IoT hub. This article focuses on using the tool to test your IoT Plug and Play devices. After installing the tool on your local machine, you can use it to connect to a hub. You can use the tool to view the telemetry the devices are sending, work with device properties, and invoke commands. ++This article shows you how to: ++- Install and configure the Azure IoT explorer tool. +- Use the tool to interact with and test your IoT Plug and Play devices. ++For more general information about using the tool, see the GitHub [readme](https://github.com/Azure/azure-iot-explorer/blob/master/README.md). ++To use the Azure IoT explorer tool, you need: ++- An Azure IoT hub. There are many ways to add an IoT hub to your Azure subscription, such as [Creating an IoT hub by using the Azure CLI](../iot-hub/iot-hub-create-using-cli.md). You need the IoT hub connection string to run the Azure IoT explorer tool. If you don't have an Azure subscription, create a [free account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F) before you begin. +- A device registered in your IoT hub. You can use IoT Explorer to create and manage device registrations in your IoT Hub. ++## Install Azure IoT explorer ++Go to [Azure IoT explorer releases](https://github.com/Azure/azure-iot-explorer/releases) and expand the list of assets for the most recent release. Download and install the most recent version of the application. ++>[!Important] +> Update to version 0.13.x to resolve models from any repository based on [https://github.com/Azure/iot-plugandplay-models](https://github.com/Azure/iot-plugandplay-models) ++## Use Azure IoT explorer ++For a device, you can either connect your own device, or use one of the sample simulated devices. For some example simulated devices written in different languages, see the [Connect a sample IoT Plug and Play device application to IoT Hub](../iot-develop/tutorial-connect-device.md) tutorial. ++### Connect to your hub ++The first time you run Azure IoT explorer, you're prompted for your IoT hub's connection string. After you add the connection string, select **Connect**. You can use the tool's settings to switch to another IoT hub by updating the connection string. ++The model definition for an IoT Plug and Play device is stored in either the public repository, the connected device, or a local folder. By default, the tool looks for your model definition in the public repository and your connected device. You can add and remove sources, or configure the priority of the sources in **Settings**: ++To add a source: ++1. Go to **Home/IoT Plug and Play Settings** +2. Select **Add** and choose your source, from a repository or local folder. ++To remove a source: ++1. Go to **Home/IoT Plug and Play Settings** +2. 
Find the source you want to remove. +3. Select **X** to remove it. ++Change the source priorities: ++You can drag and drop one of the model definition sources to a different ranking in the list. ++### View devices ++After the tool connects to your IoT hub, it displays the **Devices** list page that lists the device identities registered with your IoT hub. You can select any entry in the list to see more information. ++On the **Devices** list page you can: ++- Select **New** to register a new device with your hub. Then enter a device ID. Use the default settings to automatically generate authentication keys and enable the connection to your hub. +- Select a device and then select **Delete** to delete a device identity. Review the device details before you complete this action to be sure you're deleting the right device identity. ++## Interact with a device ++On the **Devices** list page, select a value in the **Device ID** column to view the detail page for the registered device. For each device there are two sections: **Device** and **Digital Twin**. ++### Device ++This section includes the **Device Identity**, **Device Twin**, **Telemetry**, **Direct method**, **Cloud-to-device message**, **Module Identity** tabs. ++- You can view and update the [device identity](../iot-hub/iot-hub-devguide-identity-registry.md) information on the **Device identity** tab. +- You can access the [device twin](../iot-hub/iot-hub-devguide-device-twins.md) information on the **Device Twin** tab. +- If a device is connected and actively sending data, you can view the [telemetry](../iot-hub/iot-hub-devguide-messages-read-builtin.md) on the **Telemetry** tab. +- You can call a [direct method](../iot-hub/iot-hub-devguide-direct-methods.md) on the device on the **Direct method** tab. +- You can send a [cloud-to-device message](../iot-hub/iot-hub-devguide-messages-c2d.md) on the **Cloud-to-device messages** tab. +- You can access the [module twin](../iot-hub/iot-hub-devguide-module-twins.md) information. ++### IoT Plug and Play components ++If the device is connected to the hub using a **Model ID**, the tool shows the **IoT Plug and Play components** tab where you can see the **Model ID**. ++If the **Model ID** is available in one of the configured sources - Public Repo or Local Folder, the list of components is displayed. Selecting a component shows the properties, commands, and telemetry available. ++On the **Component** page, you can view the read-only properties, update writable properties, invoke commands, and see the telemetry messages produced by this component. +++#### Properties +++You can view the read-only properties defined in an interface on the **Properties (read-only)** tab. You can update the writable properties defined in an interface on the **Properties (writable)** tab: ++1. Go to the **Properties (writable)** tab. +1. Click the property you'd like to update. +1. Enter the new value for the property. +1. Preview the payload to be sent to the device. +1. Submit the change. ++After you submit a change, you can track the update status: **synching**, **success**, or **error**. When the synching is complete, you see the new value of your property in the **Reported Property** column. If you navigate to other pages before the synching completes, the tool still notifies you when the update is complete. You can also use the tool's notification center to see the notification history. ++#### Commands ++To send a command to a device, go to the **Commands** tab: ++1. 
In the list of commands, expand the command you want to trigger. +1. Enter any required values for the command. +1. Preview the payload to be sent to the device. +1. Submit the command. ++#### Telemetry ++To view the telemetry for the selected interface, go to its **Telemetry** tab. ++#### Known Issues ++For a list of the IoT features supported by the latest version of the tool, see [Feature list](https://github.com/Azure/azure-iot-explorer/wiki). ++## Next steps ++In this how-to article, you learned how to install and use Azure IoT explorer to interact with your IoT Plug and Play devices. A suggested next step is to learn how to [Manage IoT Plug and Play digital twins](../iot-develop/howto-manage-digital-twin.md). |
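If you want something for the tool's **Telemetry** and **Direct method** tabs to display, you can run a small test device alongside Azure IoT explorer. The following sketch uses the Python `azure-iot-device` package; the connection string placeholder, the `temperature` field, and the method response payload are illustrative assumptions rather than values the tool requires.

```python
# pip install azure-iot-device
# A minimal simulated device to observe in Azure IoT explorer (sketch, not an official sample).
import json
import random
import time

from azure.iot.device import IoTHubDeviceClient, Message, MethodResponse

CONNECTION_STRING = "<device-connection-string>"  # copy it from the device's page in IoT explorer

client = IoTHubDeviceClient.create_from_connection_string(CONNECTION_STRING)

def handle_method(request):
    # Calls made from the "Direct method" tab arrive here.
    print(f"Direct method '{request.name}' invoked with payload {request.payload}")
    client.send_method_response(
        MethodResponse.create_from_method_request(request, 200, {"result": "ok"})
    )

client.on_method_request_received = handle_method
client.connect()

try:
    while True:
        # Messages sent here appear on the "Telemetry" tab.
        msg = Message(json.dumps({"temperature": 20 + random.random() * 10}))
        msg.content_type = "application/json"
        msg.content_encoding = "utf-8"
        client.send_message(msg)
        time.sleep(5)
except KeyboardInterrupt:
    client.shutdown()
```

While the script runs, the device shows as connected in the tool and the **Telemetry** tab updates every few seconds.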
iot | Iot Glossary | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot/iot-glossary.md | + + Title: Azure IoT glossary of terms | Microsoft Docs +description: Developer guide - a glossary explaining some of the common terms used in the Azure IoT articles. +++++ Last updated : 08/26/2022++# Generated from YAML source. +++# Glossary of IoT terms ++This article lists some of the common terms used in the IoT articles. ++## A ++### Advanced Message Queueing Protocol ++One of the messaging protocols that [IoT Hub](#iot-hub) and IoT Central support for communicating with [devices](#device). ++[Learn more](../iot-hub/iot-hub-devguide-protocols.md) ++Casing rules: Always *Advanced Message Queueing Protocol*. ++First and subsequent mentions: Depending on the context spell out in full. Otherwise use the abbreviation AMQP. ++Abbreviation: AMQP ++Applies to: Iot Hub, IoT Central, Device developer ++### Allocation policy ++In the [Device Provisioning Service](#device-provisioning-service), the allocation policy determines how the service assigns [devices](#device) to a [Linked IoT hub](#linked-iot-hub). ++Casing rules: Always lowercase. ++Applies to: Device Provisioning Service ++### Attestation mechanism ++In the [Device Provisioning Service](#device-provisioning-service), the attestation mechanism is the method used to confirm a [device](#device)'s identity. The attestation mechanism is configured on an [enrollment](#enrollment). ++Attestation mechanisms include X.509 certificates, Trusted Platform [Modules](#module), and symmetric keys. ++Casing rules: Always lowercase. ++Applies to: Device Provisioning Service ++### Automatic deployment ++A feature in [IoT Edge](#iot-edge) that configures a target set of [IoT Edge devices](#iot-edge-device) to run a set of IoT Edge [modules](#module). Each deployment continuously ensures that all [devices](#device) that match its [target condition](#target-condition) are running the specified set of modules, even when new devices are created or are modified to match the target condition. Each IoT Edge device only receives the highest priority deployment whose target condition it meets. ++[Learn more](../iot-edge/module-deployment-monitoring.md) ++Casing rules: Always lowercase. ++Applies to: IoT Edge ++### Automatic device configuration ++A feature of [IoT Hub](#iot-hub) that enables your [solution](#solution) back end to assign [desired properties](#desired-properties) to a set of [device twins](#device-twin) and report [device](#device) status using system and custom metrics. ++[Learn more](../iot-hub/iot-hub-automatic-device-management.md) ++Casing rules: Always lowercase. ++Applies to: Iot Hub ++### Automatic device management ++A feature of [IoT Hub](#iot-hub) that automates many of the repetitive and complex tasks of managing large [device](#device) fleets over the entirety of their lifecycles. The feature lets you target a set of devices based on their [properties](#properties), define a [desired configuration](#desired-configuration), and let IoT Hub update devices whenever they come into scope. ++Consists of [automatic device configurations](../iot-hub/iot-hub-automatic-device-management.md) and [IoT Edge automatic deployments](../iot-edge/how-to-deploy-at-scale.md). ++Casing rules: Always lowercase. ++Applies to: Iot Hub ++### Azure Certified Device program ++Azure Certified [Device](#device) is a free program that enables you to differentiate, certify, and promote your IoT devices built to run on Azure. 
++[Learn more](../certification/overview.md) ++Casing rules: Always capitalize as *Azure Certified Device*. ++Applies to: Iot Hub, IoT Central ++### Azure Digital Twins ++A platform as a service (PaaS) offering for creating digital representations of real-world things, places, business processes, and people. Build twin graphs that represent entire environments, and use them to gain insights to drive better products, optimize operations and costs, and create breakthrough customer experiences. ++[Learn more](../digital-twins/overview.md) ++Casing rules: Always capitalize when you're referring to the service. ++First and subsequent mentions: When you're referring to the service, always spell out in full as *Azure Digital Twins*. ++Example usage: The data in your *Azure Digital Twins* model can be routed to downstream Azure services for more analytics or storage. ++Applies to: Digital Twins ++### Azure Digital Twins instance ++A single instance of the [Azure Digital Twins](#azure-digital-twins) service in a customer's subscription. While Azure [Digital Twins](#digital-twin) refers to the Azure service as a whole, your Azure Digital Twins *instance* is your individual Azure Digital Twins resource. ++Casing rules: Always capitalize the service name. ++First and subsequent mentions: Always spell out in full as *Azure Digital Twins instance*. ++Applies to: Digital Twins ++### Azure IoT Explorer ++A tool you can use to view the [telemetry](#telemetry) the [device](#device) is sending, work with device [properties](#properties), and call [commands](#command). You can also use the explorer to interact with and test your devices, and to manage [IoT Plug and Play devices](#iot-plug-and-play-device). ++[Learn more](https://github.com/Azure/azure-iot-explorer) ++Casing rules: Always capitalize as *Azure IoT Explorer*. ++Applies to: Iot Hub, Device developer ++### Azure IoT Tools ++A cross-platform, open-source, Visual Studio Code extension that helps you manage Azure [IoT Hub](#iot-hub) and [devices](#device) in VS Code. With Azure IoT Tools, IoT developers can easily develop an IoT project in VS Code ++Casing rules: Always capitalize as *Azure IoT Tools*. ++Applies to: Iot Hub, IoT Edge, IoT Central, Device developer ++### Azure IoT device SDKs ++These SDKS, available for multiple languages, enable you to create [device apps](#device-app) that interact with an [IoT hub](#iot-hub) or an IoT Central application. ++[Learn more](../iot-develop/about-iot-sdks.md) ++Casing rules: Always refer to as *Azure IoT device SDKs*. ++First and subsequent mentions: On first mention, always use *Azure IoT device SDKs*. On subsequent mentions abbreviate to *device SDKs*. ++Example usage: The *Azure IoT device SDKs* are a set of device client libraries, developer guides, samples, and documentation. The *device SDKs* help you to programmatically connect devices to Azure IoT services. ++Applies to: Iot Hub, IoT Central, Device developer ++### Azure IoT service SDKs ++These SDKs, available for multiple languages, enable you to create [back-end apps](#back-end-app) that interact with an [IoT hub](#iot-hub). ++[Learn more](../iot-hub/iot-hub-devguide-sdks.md#azure-iot-hub-service-sdks) ++Casing rules: Always refer to as *Azure IoT service SDKs*. ++First and subsequent mentions: On first mention, always use *Azure IoT service SDKs*. On subsequent mentions abbreviate to *service SDKs*. 
++Applies to: Iot Hub ++## B ++### Back-end app ++In the context of [IoT Hub](#iot-hub), an app that connects to one of the service-facing [endpoints](#endpoint) on an IoT hub. For example, a back-end app might retrieve [device-to-cloud](#device-to-cloud) messages or manage the [identity registry](#identity-registry). Typically, a back-end app runs in the cloud, but for simplicity many of the tutorials show back-end apps as console apps running on your local development machine. ++Casing rules: Always lowercase. ++Applies to: Iot Hub ++### Built-in endpoints ++[Endpoints](#endpoint) built into [IoT Hub](#iot-hub). For example, every IoT hub includes a built-in endpoint that is Event Hubs-compatible. ++Casing rules: Always lowercase. ++Applies to: Iot Hub ++## C ++### Cloud gateway ++A cloud-hosted app that enables connectivity for [devices](#device) that cannot connect directly to [IoT Hub](#iot-hub) or IoT Central. A cloud [gateway](#gateway) is hosted in the cloud in contrast to a [field gateway](#field-gateway) that runs local to your devices. A common use case for a cloud gateway is to implement protocol translation for your devices. ++Casing rules: Always lowercase. ++Applies to: Iot Hub, IoT Central ++### Cloud property ++A feature in IoT Central that lets your store [device](#device) metadata in the IoT Central application. Cloud [properties](#properties) are defined in the [device template](#device-template), but aren't part of the [device model](#device-model). Cloud properties are never synchronized with a device. ++Casing rules: Always lowercase. ++Applies to: IoT Central ++### Cloud-to-device ++Messages sent from an [IoT hub](#iot-hub) to a connected [device](#device). Often, these messages are [commands](#command) that instruct the device to take an action. ++Casing rules: Always lowercase. ++Abbreviation: Do not use *C2D*. ++Applies to: Iot Hub ++### Command ++A command is defined in an IoT Plug and Play [interface](#interface) to represent a method that can be called on the [digital twin](#digital-twin). For example, a command to reboot a [device](#device). In IoT Central, commands are defined in the [device template](#device-template). ++Applies to: Iot Hub, IoT Central, Device developer ++### Component ++In IoT Plug and Play and [Azure Digital Twins](#azure-digital-twins), components let you build a [model](#model) [interface](#interface) as an assembly of other interfaces. A [device model](#device-model) can combine multiple interfaces as components. For example, a model might include a switch component and thermostat component. Multiple components in a model can also use the same interface type. For example, a model might include two thermostat components. ++Casing rules: Always lowercase. ++Applies to: Iot Hub, Digital Twins, Device developer ++### Configuration ++In the context of [automatic device configuration](#automatic-device-configuration) in [IoT Hub](#iot-hub), it defines the [desired configuration](#desired-configuration) for a set of [devices](#device) twins and provides a set of metrics to report status and progress. ++Casing rules: Always lowercase. ++Applies to: Iot Hub ++### Connection string ++Use in your app code to encapsulate the information required to connect to an [endpoint](#endpoint). A connection string typically includes the address of the endpoint and security information, but connection string formats vary across services. 
There are two types of connection string associated with the [IoT Hub](#iot-hub) service: ++- *[Device](#device) connection strings* enable devices to connect to the device-facing endpoints on an IoT hub. +- *IoT Hub connection strings* enable [back-end apps](#back-end-app) to connect to the service-facing endpoints on an IoT hub. ++Casing rules: Always lowercase. ++Applies to: Iot Hub, Device developer ++### Custom endpoints ++User-defined [endpoints](#endpoint) on an [IoT hub](#iot-hub) that deliver messages dispatched by a [routing rule](#routing-rule). These endpoints connect directly to an event hub, a Service Bus queue, or a Service Bus topic. ++Casing rules: Always lowercase. ++Applies to: Iot Hub ++### Custom gateway ++Enables connectivity for [devices](#device) that cannot connect directly to [IoT Hub](#iot-hub) or IoT Central. You can use Azure [IoT Edge](#iot-edge) to build custom [gateways](#gateway) that implement custom logic to handle messages, custom protocol conversions, and other processing. ++Casing rules: Always lowercase. ++Applies to: Iot Hub, IoT Central ++## D ++### Default component ++All [IoT Plug and Play device](#iot-plug-and-play-device) [models](#model) have a default [component](#component). A simple [device model](#device-model) only has a default component - such a model is also known as a no-component [device](#device). A more complex model has multiple components nested below the default component. ++Casing rules: Always lowercase. ++Applies to: Iot Hub, IoT Central, Device developer ++### Deployment manifest ++A JSON document in [IoT Edge](#iot-edge) that contains the [configuration](#configuration) data for one or more [IoT Edge device](#iot-edge-device) [module twins](#module-twin). ++Casing rules: Always lowercase. ++Applies to: IoT Edge, IoT Central ++### Desired configuration ++In the context of a [device twin](#device-twin), desired [configuration](#configuration) refers to the complete set of [properties](#properties) and metadata in the [device](#device) twin that should be synchronized with the device. ++Casing rules: Always lowercase. ++Applies to: Iot Hub ++### Desired properties ++In the context of a [device twin](#device-twin), desired [properties](#properties) is a subsection of the [device](#device) twin that is used with [reported properties](#reported-properties) to synchronize device [configuration](#configuration) or condition. Desired properties can only be set by a [back-end app](#back-end-app) and are observed by the [device app](#device-app). IoT Central uses the term writable properties. ++Casing rules: Always lowercase. ++Applies to: Iot Hub ++### Device ++In the context of IoT, a device is typically a small-scale, standalone computing device that may collect data or control other devices. For example, a device might be an environmental monitoring device, or a controller for the watering and ventilation systems in a greenhouse. The device catalog provides a list of certified devices. ++Casing rules: Always lowercase. ++Applies to: Iot Hub, IoT Central, IoT Edge, Device Provisioning Service, Device developer ++### Device Provisioning Service ++A helper service for [IoT Hub](#iot-hub) and IoT Central that you use to configure zero-touch [device provisioning](#device-provisioning). With the DPS, you can provision millions of [devices](#device) in a secure and scalable manner. ++Casing rules: Always capitalized as *Device Provisioning Service*. 
++First and subsequent mentions: IoT Hub Device Provisioning Service ++Abbreviation: DPS ++Applies to: Iot Hub, Device Provisioning Service, IoT Central ++### Device REST API ++A REST API you can use on a [device](#device) to send [device-to-cloud](#device-to-cloud) messages to an [IoT hub](#iot-hub), and receive [cloud-to-device](#cloud-to-device) messages from an IoT hub. Typically, you should use one of the higher-level [Azure IoT device SDKs](#azure-iot-device-sdks). ++[Learn more](/rest/api/iothub/device) ++Casing rules: Always *device REST API*. ++Applies to: Iot Hub ++### Device app ++A [device](#device) app runs on your device and handles the communication with your [IoT hub](#iot-hub) or IoT Central application. Typically, you use one of the [Azure IoT device SDKs](#azure-iot-device-sdks) when you implement a device app. ++Casing rules: Always lowercase. ++Applies to: Iot Hub, IoT Central, Device developer ++### Device builder ++The person responsible for creating the code to run on your [devices](#device). Device builders typically use one of the [Azure IoT device SDKs](#azure-iot-device-sdks) to implement the device client. A device builder uses a [device model](#device-model) and [interfaces](#interface) when implementing code to run on an [IoT Plug and Play device](#iot-plug-and-play-device). ++Casing rules: Always lowercase. ++Applies to: Iot Hub, IoT Central, IoT Edge, Device developer ++### Device identity ++A unique identifier assigned to every [device](#device) registered in the [IoT Hub](#iot-hub) [identity registry](#identity-registry) or in an IoT Central application. ++Casing rules: Always lowercase. If you're using the abbreviation, *ID* is all upper case. ++Abbreviation: Device ID ++Applies to: Iot Hub, IoT Central ++### Device management ++[Device](#device) management encompasses the full lifecycle associated with managing the devices in your IoT [solution](#solution) including planning, provisioning, configuring, monitoring, and retiring. ++Casing rules: Always lowercase. ++Applies to: Iot Hub, IoT Central ++### Device model ++A description, that uses the [Digital Twins Definition Language](#digital-twins-definition-language), of the capabilities of a [device](#device). Capabilities include [telemetry](#telemetry), [properties](#properties), and [commands](#command). ++[Learn more](../iot-develop/concepts-modeling-guide.md) ++Casing rules: Always lowercase. ++Applies to: Iot Hub, IoT Central, Device developer, Digital Twins ++### Device provisioning ++The process of adding the initial [device](#device) data to the stores in your [solution](#solution). To enable a new device to connect to your hub, you must add a device ID and keys to the [IoT Hub](#iot-hub) [identity registry](#identity-registry). The [Device Provisioning Service](#device-provisioning-service) can automatically provision devices in an IoT hub or IoT Central application. ++Casing rules: Always lowercase. ++Applies to: Iot Hub, IoT Central, Device Provisioning Service ++### Device template ++In IoT Central, a [device](#device) template is a blueprint that defines the characteristics and behaviors of a type of device that connects to your application. ++For example, the device template can define the [telemetry](#telemetry) that a device sends so that IoT Central can create visualizations that use the correct units and data types. A [device model](#device-model) is part of the device template. ++Casing rules: Always lowercase. 
++Abbreviation: Avoid abbreviating to *template* as IoT Central also has application templates. ++Applies to: IoT Central ++### Device twin ++A [device](#device) twin is JSON document that stores device state information such as metadata, [configurations](#configuration), and conditions. [IoT Hub](#iot-hub) persists a device twin for each device that you provision in your IoT hub. Device twins enable you to synchronize device conditions and configurations between the device and the [solution](#solution) back end. You can query device twins to locate specific devices and for the status of long-running operations. ++See also [Digital twin](#digital-twin) ++Casing rules: Always lowercase. ++Applies to: Iot Hub ++### Device-to-cloud ++Refers to messages sent from a connected [device](#device) to [IoT Hub](#iot-hub) or IoT Central. ++Casing rules: Always lowercase. ++Abbreviation: Do not use *D2C*. ++Applies to: Iot Hub ++### Digital Twins Definition Language ++A JSON-LD language for describing [models](#model) and [interfaces](#interface) for [IoT Plug and Play devices](#iot-plug-and-play-device) and [Azure Digital Twins](#azure-digital-twins) entities. The language enables the IoT platform and IoT [solutions](#solution) to use the semantics of the entity. ++[Learn more](https://github.com/Azure/opendigitaltwins-dtdl) ++First and subsequent mentions: Spell out in full as *Digital Twins Definition Language*. ++Abbreviation: DTDL ++Applies to: Iot Hub, IoT Central, Digital Twins ++### Digital twin ++A digital twin is a collection of digital data that represents a physical object. Changes in the physical object are reflected in the digital twin. In some scenarios, you can use the digital twin to manipulate the physical object. The [Azure Digital Twins service](../digital-twins/index.yml) uses [models](#model) expressed in the [Digital Twins Definition Language](#digital-twins-definition-language) to represent digital twins of [physical devices](#physical-device) or higher-level abstract business concepts, enabling a wide range of cloud-based digital twin [solutions](#solution). An [IoT Plug and Play](../iot-develop/index.yml) [device](#device) has a digital twin, described by a Digital Twins Definition Language [device model](#device-model). ++See also [Device twin](#device-twin) ++Casing rules: Always lowercase. ++Applies to: Iot Hub, IoT Central, Digital Twins, Device developer ++### Digital twin change events ++When an [IoT Plug and Play device](#iot-plug-and-play-device) is connected to an [IoT hub](#iot-hub), the hub can use its routing capability to send notifications of [digital twin](#digital-twin) changes. The IoT Central data export feature can also forward digital twin change events to other services. For example, whenever a property value changes on a [device](#device), IoT Hub can send a notification to an [endpoint](#endpoint) such as an event hub. ++Casing rules: Always lowercase. ++Abbreviation: Always spell out in full to distinguish from other types of change event. ++Applies to: Iot Hub, IoT Central ++### Digital twin graph ++In the [Azure Digital Twins](#azure-digital-twins) service, you can connect [digital twins](#digital-twin) with [relationships](#relationship) to create knowledge graphs that digitally represent your entire physical environment. A single [Azure Digital Twins instance](#azure-digital-twins-instance) can host many disconnected graphs, or one single interconnected graph. ++Casing rules: Always lowercase. 
++First and subsequent mentions: Use *digital twin graph* on first mention, then use *twin graph*. ++Applies to: Iot Hub ++### Direct method ++A way to trigger a method to execute on a [device](#device) by invoking an API on your [IoT hub](#iot-hub). ++Casing rules: Always lowercase. ++Applies to: Iot Hub ++### Downstream service ++A relative term describing services that receive data from the current context. For example, in the context of [Azure Digital Twins](#azure-digital-twins), Time Series Insights is a downstream service if you set up your data to flow from Azure [Digital Twins](#digital-twin) into Time Series Insights. ++Casing rules: Always lowercase. ++Applies to: Iot Hub, IoT Central, Digital Twins ++## E ++### Endpoint ++A named representation of a data routing service that can receive data from other services. ++An [IoT hub](#iot-hub) exposes multiple endpoints that enable your apps to connect to the IoT hub. There are [device](#device)-facing endpoints that enable devices to perform operations such as sending [device-to-cloud](#device-to-cloud) messages. There are service-facing management endpoints that enable [back-end apps](#back-end-app) to perform operations such as [device identity](#device-identity) management. There are service-facing [built-in endpoints](#built-in-endpoints) for reading device-to-cloud messages. You can create [custom endpoints](#custom-endpoints) to receive device-to-cloud messages dispatched by a [routing rule](#routing-rule). ++Casing rules: Always lowercase. ++Applies to: Iot Hub ++### Enrollment ++In the [Device Provisioning Service](#device-provisioning-service), an enrollment is the record of individual [devices](#device) or groups of devices that may register with a [linked IoT hub](#linked-iot-hub) through autoprovisioning. ++Casing rules: Always lowercase. ++Applies to: Iot Hub, IoT Central, Device Provisioning Service ++### Enrollment group ++In the [Device Provisioning Service](#device-provisioning-service) and IoT Central, an [enrollment](#enrollment) group identifies a group of [devices](#device) that share an X.509 or symmetric key [attestation mechanism](#attestation-mechanism). ++Casing rules: Always lowercase. ++Applies to: Iot Hub, Device Provisioning Service, IoT Central ++### Event Hubs-compatible endpoint ++An [IoT Hub](#iot-hub) [endpoint](#endpoint) that lets you use any Event Hubs-compatible method to read [device](#device) messages sent to the hub. Event Hubs-compatible methods include the [Event Hubs SDKs](../event-hubs/event-hubs-programming-guide.md) and [Azure Stream Analytics](../stream-analytics/stream-analytics-introduction.md). ++Casing rules: Always lowercase. ++Applies to: Iot Hub ++### Event handler ++A process that's triggered by the arrival of an event. For example, you can create event handlers by adding event processing code to an Azure function, and sending data to it using [endpoints](#endpoint) and [event routing](#event-routing). ++Casing rules: Always lowercase. ++Applies to: Iot Hub ++### Event routing ++The process of sending events and their data from one [device](#device) or service to the [endpoint](#endpoint) of another. ++In [Iot Hub](#iot-hub), you can define [routing rules](#routing-rule) to describe how messages should be sent. In [Azure Digital Twins](#azure-digital-twins), event routes are entities that are created for this purpose. Azure [Digital Twins](#digital-twin) event routes can contain filters to limit what types of events are sent to each endpoint. ++Casing rules: Always lowercase. 
++Applies to: Iot Hub, Digital Twins ++## F ++### Field gateway ++Enables connectivity for [devices](#device) that can't connect directly to [IoT Hub](#iot-hub) and is typically deployed locally with your devices. ++Casing rules: Always lowercase. ++Applies to: Iot Hub, IoT Central ++## G ++### Gateway ++A gateway enables connectivity for [devices](#device) that cannot connect directly to [IoT Hub](#iot-hub). See also [field gateway](#field-gateway), [cloud gateway](#cloud-gateway), and [custom gateway](#custom-gateway). ++Casing rules: Always lowercase. ++Applies to: Iot Hub, IoT Central ++### Gateway device ++An example of a [field gateway](#field-gateway). A [gateway](#gateway) [device](#device) can be standard IoT device or an [IoT Edge device](#iot-edge-device). ++A gateway device enables connectivity for downstream devices that cannot connect directly to [IoT Hub](#iot-hub). ++Casing rules: Always lowercase. ++Applies to: Iot Hub, IoT Central, IoT Edge ++## H ++### Hardware security module ++Used for secure, hardware-based storage of [device](#device) secrets. It's the most secure form of secret storage for a device. A hardware security [module](#module) can store both X.509 certificates and symmetric keys. In the [Device Provisioning Service](#device-provisioning-service), an [attestation mechanism](#attestation-mechanism) can use a hardware security module. ++Casing rules: Always lowercase. ++First and subsequent mentions: Spell out in full on first mention as *hardware security module*. ++Abbreviation: HSM ++Applies to: Iot Hub, Device developer, Device Provisioning Service ++## I ++### ID scope ++A unique value assigned to a [Device Provisioning Service](#device-provisioning-service) instance when it's created. ++IoT Central applications make use of DPS instances and make the ID scope available through the IoT Central UI. ++Casing rules: Always use *ID scope*. ++Applies to: Iot Hub, IoT Central, Device Provisioning Service ++### Identity registry ++A built-in [component](#component) of an [IoT hub](#iot-hub) that stores information about the individual [devices](#device) permitted to connect to the hub. ++Casing rules: Always lowercase. ++Applies to: Iot Hub ++### Individual enrollment ++Identifies a single [device](#device) that the [Device Provisioning Service](#device-provisioning-service) can provision to an [IoT hub](#iot-hub). ++Casing rules: Always lowercase. ++Applies to: Iot Hub, Device Provisioning Service ++### Industry 4.0 ++Refers to the fourth revolution that's occurred in manufacturing. Companies can build connected [solutions](#solution) to manage the manufacturing facility and equipment more efficiently by enabling manufacturing equipment to be cloud connected, allowing remote access and management from the cloud, and enabling OT personnel to have a single pane view of their entire facility. ++Applies to: Iot Hub, IoT Central ++### Interface ++In IoT Plug and Play, an interface describes related capabilities that are implemented by a [IoT Plug and Play device](#iot-plug-and-play-device) or [digital twin](#digital-twin). You can reuse interfaces across different [device models](#device-model). When an interface is used in a [device](#device) [model](#model), it defines a [component](#component) of the device. A simple device only contains a default interface. ++In [Azure Digital Twins](#azure-digital-twins), *interface* may be used to refer to the top-level code item in a [Digital Twins Definition Language](#digital-twins-definition-language) model definition. 
++Casing rules: Always lowercase. ++Applies to: Device developer, Digital Twins ++### IoT Edge ++A service and related client libraries and runtime that enables cloud-driven deployment of Azure services and [solution](#solution)-specific code to on-premises [devices](#device). [IoT Edge devices](#iot-edge-device) can aggregate data from other devices to perform computing and analytics before sending the data to the cloud. ++[Learn more](../iot-edge/index.yml) ++Casing rules: Always capitalize as *IoT Edge*. ++First and subsequent mentions: Spell out as *Azure IoT Edge*. ++Applies to: IoT Edge ++### IoT Edge agent ++The part of the [IoT Edge runtime](#iot-edge-runtime) responsible for deploying and monitoring [modules](#module). ++Casing rules: Always capitalize as *IoT Edge agent*. ++Applies to: IoT Edge ++### IoT Edge device ++A [device](#device) that uses containerized [IoT Edge](#iot-edge) [modules](#module) to run Azure services, third-party services, or your own code. On the device, the [IoT Edge runtime](#iot-edge-runtime) manages the modules. You can remotely monitor and manage an IoT Edge device from the cloud. ++Casing rules: Always capitalize as *IoT Edge device*. ++Applies to: IoT Edge ++### IoT Edge hub ++The part of the [IoT Edge runtime](#iot-edge-runtime) responsible for [module](#module) to module, upstream, and downstream communications. ++Casing rules: Always capitalize as *IoT Edge hub*. ++Applies to: IoT Edge ++### IoT Edge runtime ++Includes everything that Microsoft distributes to be installed on an [IoT Edge device](#iot-edge-device). It includes Edge agent, Edge hub, and the [IoT Edge](#iot-edge) security daemon. ++Casing rules: Always capitalize as *IoT Edge runtime*. ++Applies to: IoT Edge ++### IoT Hub ++A fully managed Azure service that enables reliable and secure bidirectional communications between millions of [devices](#device) and a [solution](#solution) back end. For more information, see [What is Azure IoT Hub?](../iot-hub/about-iot-hub.md). Using your Azure subscription, you can create IoT hubs to handle your IoT messaging workloads. ++[Learn more](../iot-hub/about-iot-hub.md) ++Casing rules: When referring to the service, capitalize as *IoT Hub*. When referring to an instance, capitalize as *IoT hub*. ++First and subsequent mentions: Spell out in full as *Azure IoT Hub*. Subsequent mentions can be *IoT Hub*. If the context is clear, use *hub* to refer to an instance. ++Example usage: The Azure IoT Hub service enables secure, bidirectional communication. The device sends data to your IoT hub. ++Applies to: Iot Hub ++### IoT Hub Resource REST API ++An API you can use to manage the [IoT hubs](#iot-hub) in your Azure subscription with operations such as creating, updating, and deleting hubs. ++Casing rules: Always capitalize as *IoT Hub Resource REST API*. ++Applies to: Iot Hub ++### IoT Hub metrics ++A feature in the Azure portal that lets you monitor the state of your [IoT hubs](#iot-hub). IoT Hub metrics enable you to assess the overall health of an IoT hub and the [devices](#device) connected to it. ++Casing rules: Always capitalize as *IoT Hub metrics*. ++Applies to: Iot Hub ++### IoT Hub query language ++A SQL-like language for [IoT Hub](#iot-hub) that lets you query your [jobs](#job), [digital twins](#digital-twin), and [device twins](#device-twin). ++Casing rules: Always capitalize as *IoT Hub query language*. 
++First and subsequent mentions: Spell out in full as *IoT Hub query language*, if the context is clear subsequent mentions can be *query language*. ++Applies to: Iot Hub ++### IoT Plug and Play bridge ++An open-source application that enables existing sensors and peripherals attached to Windows or Linux [gateways](#gateway) to connect as [IoT Plug and Play devices](#iot-plug-and-play-device). ++Casing rules: Always capitalize as *IoT Plug and Play bridge*. ++First and subsequent mentions: Spell out in full as *IoT Plug and Play bridge*. If the context is clear, subsequent mentions can be *bridge*. ++Applies to: Iot Hub, Device developer, IoT Central ++### IoT Plug and Play conventions ++A set of conventions that IoT [devices](#device) should follow when they exchange data with a [solution](#solution). ++Casing rules: Always capitalize as *IoT Plug and Play conventions*. ++Applies to: Iot Hub, IoT Central, Device developer ++### IoT Plug and Play device ++Typically a small-scale, standalone computing [device](#device) that collects data or controls other devices, and that runs software or firmware that implements a [device model](#device-model). For example, an IoT Plug and Play device might be an environmental monitoring device, or a controller for a smart-agriculture irrigation system. An IoT Plug and Play device might be implemented directly or as an [IoT Edge](#iot-edge) [module](#module). ++Casing rules: Always capitalize as *IoT Plug and Play device*. ++Applies to: Iot Hub, IoT Central, Device developer ++### IoT extension for Azure CLI ++An extension for the Azure CLI. The extension lets you complete tasks such as managing your [devices](#device) in the [identity registry](#identity-registry), sending and receiving device messages, and monitoring your [IoT hub](#iot-hub) operations. ++[Learn more](/cli/azure/azure-cli-reference-for-IoT) ++Casing rules: Always capitalize as *IoT extension for Azure CLI*. ++Applies to: Iot Hub, IoT Central, IoT Edge, Device Provisioning Service, Device developer ++## J ++### Job ++In the context of [IoT Hub](#iot-hub), jobs let you schedule and track activities on a set of [devices](#device) registered with your IoT hub. Activities include updating [device twin](#device-twin) [desired properties](#desired-properties), updating device twin [tags](#tag), and invoking [direct methods](#direct-method). IoT Hub also uses jobs to import to and export from the [identity registry](#identity-registry). ++In the context of IoT Central, jobs let you manage your connected devices in bulk by setting [properties](#properties) and calling [commands](#command). IoT Central jobs also let you update cloud properties in bulk. ++Casing rules: Always lowercase. ++Applies to: Iot Hub, IoT Central ++## L ++### Leaf device ++A [device](#device) with no downstream devices connected. Typically leaf devices are connected to a [gateway device](#gateway-device). ++Casing rules: Always lowercase. ++Applies to: IoT Edge, IoT Central, Device developer ++### Lifecycle event ++In [Azure Digital Twins](#azure-digital-twins), this type of event is fired when a data itemΓÇösuch as a [digital twin](#digital-twin), a [relationship](#relationship), or an [event handler](#event-handler) is created or deleted from your [Azure Digital Twins instance](#azure-digital-twins-instance). ++Casing rules: Always lowercase. 
++Applies to: Digital Twins, Iot Hub, IoT Central ++### Linked IoT hub ++An [IoT hub](#iot-hub) that is linked to a [Device Provisioning Service](#device-provisioning-service) instance. A DPS instance can register a [device](#device) ID and set the initial [configuration](#configuration) in the [device twins](#device-twin) in linked IoT hubs. ++Casing rules: Always capitalize as *linked IoT hub*. ++Applies to: Iot Hub, Device Provisioning Service ++## M ++### MQTT ++One of the messaging protocols that [IoT Hub](#iot-hub) and IoT Central support for communicating with [devices](#device). MQTT doesn't stand for anything. ++[Learn more](../iot-hub/iot-hub-devguide-protocols.md) ++First and subsequent mentions: MQTT ++Abbreviation: MQTT ++Applies to: Iot Hub, IoT Central, Device developer ++### Model ++A definition of a type of entity in your physical environment, including its [properties](#properties), telemetries, and [components](#component). Models are used to create [digital twins](#digital-twin) that represent specific physical objects of this type. Models are written in the [Digital Twins Definition Language](#digital-twins-definition-language). ++In the [Azure Digital Twins](#azure-digital-twins) service, models define [devices](#device) or higher-level abstract business concepts. In IoT Plug and Play, [device models](#device-model) describe devices specifically. ++Casing rules: Always lowercase. ++Applies to: Iot Hub, IoT Central, Digital Twins, Device developer ++### Model ID ++When an [IoT Plug and Play device](#iot-plug-and-play-device) connects to an [IoT Hub](#iot-hub) or IoT Central application, it sends the [model](#model) ID of the [Digital Twins Definition Language](#digital-twins-definition-language) model it implements. Every model as a unique model ID. This model ID enables the [solution](#solution) to find the [device model](#device-model). ++Casing rules: Always capitalize as *model ID*. ++Applies to: Iot Hub, IoT Central, Device developer, Digital Twins ++### Model repository ++Stores [Digital Twins Definition Language](#digital-twins-definition-language) [models](#model) and [interfaces](#interface). A [solution](#solution) uses a [model ID](#model-id) to retrieve a model from a repository. ++Casing rules: Always lowercase. ++Applies to: Iot Hub, IoT Central, Digital Twins ++### Model repository REST API ++An API for managing and interacting with a [model repository](#model-repository). For example, you can use the API to add and search for [device models](#device-model). ++Casing rules: Always capitalize as *model repository REST API*. ++Applies to: Iot Hub, IoT Central, Digital Twins ++### Module ++The [IoT Hub](#iot-hub) [device](#device) SDKs let you instantiate modules where each one opens an independent connection to your IoT hub. This lets you use separate namespaces for different [components](#component) on your device. ++[Module identity](#module-identity) and [module twin](#module-twin) provide the same capabilities as [device identity](#device-identity) and [device twin](#device-twin) but at a finer granularity. ++In [IoT Edge](#iot-edge), a module is a Docker container that you can deploy to [IoT Edge devices](#iot-edge-device). It performs a specific task, such as ingesting a message from a device, transforming a message, or sending a message to an IoT hub. It communicates with other modules and sends data to the [IoT Edge runtime](#iot-edge-runtime). ++Casing rules: Always lowercase. 
++Applies to: Iot Hub, IoT Edge, Device developer ++### Module identity ++A unique identifier assigned to every [module](#module) that belongs to a [device](#device). Module identities are also registered in the [identity registry](#identity-registry). ++The module identity details the security credentials the module uses to authenticate with the [IoT Hub](#iot-hub) or, in the case of an [IoT Edge](#iot-edge) module to the [IoT Edge hub](#iot-edge-hub). ++Casing rules: Always lowercase. ++Applies to: Iot Hub, IoT Edge, Device developer ++### Module image ++The docker image the [IoT Edge runtime](#iot-edge-runtime) uses to instantiate [module](#module) instances. ++Casing rules: Always lowercase. ++Applies to: IoT Edge ++### Module twin ++Similar to [device twin](#device-twin), a [module](#module) twin is JSON document that stores module state information such as metadata, [configurations](#configuration), and conditions. [IoT Hub](#iot-hub) persists a module twin for each [module identity](#module-identity) that you provision under a [device identity](#device-identity) in your IoT hub. Module twins enable you to synchronize module conditions and configurations between the module and the [solution](#solution) back end. You can query module twins to locate specific modules and query the status of long-running operations. ++Casing rules: Always lowercase. ++Applies to: Iot Hub ++## O ++### Ontology ++In the context of [Digital Twins](#digital-twin), a set of [models](#model) for a particular domain, such as real estate, smart cities, IoT systems, energy grids, and more. Ontologies are often used as schemas for knowledge graphs like the ones in [Azure Digital Twins](#azure-digital-twins), because they provide a starting point based on industry standards and best practices. ++[Learn more](../digital-twins/concepts-ontologies.md) ++Applies to: Digital Twins ++### Operational technology ++That hardware and software in an industrial facility that monitors and controls equipment, processes, and infrastructure. ++Casing rules: Always lowercase. ++Abbreviation: OT ++Applies to: Iot Hub, IoT Central, IoT Edge ++### Operations monitoring ++A feature of [IoT Hub](#iot-hub) that lets you monitor the status of operations on your IoT hub in real time. IoT Hub tracks events across several categories of operations. You can opt into sending events from one or more categories to an IoT Hub [endpoint](#endpoint) for processing. You can monitor the data for errors or set up more complex processing based on data patterns. ++Casing rules: Always lowercase. ++Applies to: Iot Hub ++## P ++### Physical device ++A real IoT [device](#device) that connects to an [IoT hub](#iot-hub). For convenience, many tutorials and quickstarts run IoT device code on a desktop machine rather than a physical device. ++Casing rules: Always lowercase. ++Applies to: Iot Hub, IoT Central, Device developer, IoT Edge ++### Primary and secondary keys ++When you connect to a [device](#device)-facing or service-facing [endpoint](#endpoint) on an [IoT hub](#iot-hub) or IoT Central application, your [connection string](#connection-string) includes key to grant you access. When you add a device to the [identity registry](#identity-registry) or add a [shared access policy](#shared-access-policy) to your hub, the service generates a primary and secondary key. Having two keys enables you to roll over from one key to another when you update a key without losing access to the IoT hub or IoT Central application. ++Casing rules: Always lowercase. 
++Applies to: Iot Hub, IoT Central ++### Properties ++In the context of a [digital twin](#digital-twin), data fields defined in an [interface](#interface) that represent some persistent state of the digital twin. You can declare properties as read-only or writable. Read-only properties, such as serial number, are set by code running on the [IoT Plug and Play device](#iot-plug-and-play-device) itself. Writable properties, such as an alarm threshold, are typically set from the cloud-based IoT [solution](#solution). ++Casing rules: Always lowercase. ++Applies to: Iot Hub, IoT Central, Digital Twins, Device developer ++### Property change event ++An event that results from a property change in a [digital twin](#digital-twin). ++Casing rules: Always lowercase. ++Applies to: Iot Hub, IoT Central, Digital Twins ++### Protocol gateway ++A [gateway](#gateway) typically deployed in the cloud to provide protocol translation services for [devices](#device) connecting to an [IoT hub](#iot-hub) or IoT Central application. ++Casing rules: Always lowercase. ++Applies to: Iot Hub, IoT Central ++## R ++### Registration ++A record of a [device](#device) in the [IoT Hub](#iot-hub) [identity registry](#identity-registry). You can register or device directly, or use the [Device Provisioning Service](#device-provisioning-service) to automate device registration. ++Casing rules: Always lowercase. ++Applies to: Iot Hub, IoT Central, Device Provisioning Service ++### Registration ID ++A unique [device identity](#device-identity) in the [Device Provisioning Service](#device-provisioning-service). The [registration](#registration) ID may be the same value as the [device](#device) identity. ++Casing rules: Always lowercase. ++Applies to: Iot Hub, IoT Central, Device Provisioning Service ++### Relationship ++Used in the [Azure Digital Twins](#azure-digital-twins) service to connect [digital twins](#digital-twin) into knowledge graphs that digitally represent your entire physical environment. The types of relationships that your twins can have are defined in the [Digital Twins Definition Language](#digital-twins-definition-language) [model](#model). ++Casing rules: Always lowercase. ++Applies to: Digital Twins ++### Reported configuration ++In the context of a [device twin](#device-twin), this refers to the complete set of [properties](#properties) and metadata in the [device](#device) twin that are reported to the [solution](#solution) back end. ++Casing rules: Always lowercase. ++Applies to: Iot Hub, Device developer ++### Reported properties ++In the context of a [device twin](#device-twin), reported [properties](#properties) is a subsection of the [device](#device) twin. Reported properties can only be set by the device but can be read and queried by a [back-end app](#back-end-app). ++Casing rules: Always lowercase. ++Applies to: Iot Hub, Device developer ++### Retry policy ++A way to handle transient errors when you connect to a cloud service. ++Casing rules: Always lowercase. ++Applies to: Iot Hub, IoT Central, Device developer ++### Routing rule ++A feature of [IoT Hub](#iot-hub) used to route [device-to-cloud](#device-to-cloud) messages to a built-in [endpoint](#endpoint) or to [custom endpoints](#custom-endpoints) for processing by your [solution](#solution) back end. ++Casing rules: Always lowercase. ++Applies to: Iot Hub ++## S ++### SASL/PLAIN ++A protocol that [Advanced Message Queueing Protocol](#advanced-message-queueing-protocol) uses to transfer security tokens. 
++[Learn more](https://wikipedia.org/wiki/Simple_Authentication_and_Security_Layer) ++Abbreviation: SASL/PLAIN ++Applies to: Iot Hub ++### Service REST API ++A REST API you can use from the [solution](#solution) back end to manage your [devices](#device). For example, you can use the [Iot Hub](#iot-hub) service API to retrieve and update [device twin](#device-twin) [properties](#properties), invoke [direct methods](#direct-method), and schedule [jobs](#job). Typically, you should use one of the higher-level service SDKs. ++Casing rules: Always *service REST API*. ++Applies to: Iot Hub, IoT Central, Device Provisioning Service, IoT Edge ++### Service operations endpoint ++An [endpoint](#endpoint) that an administrator uses to manage service settings. For example, in the [Device Provisioning Service](#device-provisioning-service) you use the service endpoint to manage [enrollments](#enrollment). ++Casing rules: Always lowercase. ++Applies to: Iot Hub, Device Provisioning Service, IoT Edge, Digital Twins ++### Shared access policy ++A way to define the permissions granted to anyone who has a valid primary or secondary key associated with that policy. You can manage the shared access policies and keys for your hub in the portal. ++Casing rules: Always lowercase. ++Applies to: Iot Hub, IoT Edge, Device Provisioning Service ++### Shared access signature ++A shared access signature is a signed URI that points to one or more resources such as an [IoT hub](#iot-hub) [endpoint](#endpoint). The URI includes a token that indicates how the resources can be accessed by the client. One of the query parameters, the signature, is constructed from the SAS parameters and signed with the key that was used to create the SAS. This signature is used by Azure Storage to authorize access to the storage resource. ++Casing rules: Always lowercase. ++Abbreviation: SAS ++Applies to: Iot Hub, Digital Twins, IoT Central, IoT Edge ++### Simulated device ++For convenience, many of the tutorials and quickstarts run [device](#device) code with simulated sensors on your local development machine. In contrast, a [physical device](#physical-device) such as an MXCHIP has real sensors and connects to an [IoT hub](#iot-hub). ++Casing rules: Always lowercase. ++Applies to: Iot Hub, IoT Central, Device developer, IoT Edge, Digital Twins, Device Provisioning Service ++### Solution ++In the context of IoT, *solution* typically refers to an IoT solution that includes elements such as [devices](#device), [device apps](#device-app), an [IoT hub](#iot-hub), other Azure services, and [back-end apps](#back-end-app). ++Casing rules: Always lowercase. ++Applies to: Iot Hub, IoT Central, Device Provisioning Service, IoT Edge, Digital Twins ++### System properties ++In the context of a [device twin](#device-twin), the read-only [properties](#properties) that include information regarding the [device](#device) usage such as last activity time and connection state. ++Casing rules: Always lowercase. ++Applies to: Iot Hub ++## T ++### Tag ++In the context of a [device twin](#device-twin), tags are [device](#device) metadata stored and retrieved by the [solution](#solution) back end in the form of a JSON document. Tags are not visible to apps on a device. ++Casing rules: Always lowercase. ++Applies to: Iot Hub ++### Target condition ++In an [IoT Edge](#iot-edge) deployment, the target condition selects the target [devices](#device) of the deployment. 
The target condition is continuously evaluated to include any new devices that meet the requirements or remove devices that no longer do. ++Casing rules: Always lowercase. ++Applies to: IoT Edge ++### Telemetry ++The data, such as wind speed or temperature, sent to an [IoT hub](#iot-hub) that was collected by a [device](#device) from its sensors. ++Unlike [properties](#properties), telemetry is not stored on a [digital twin](#digital-twin); it is a stream of time-bound data events that need to be handled as they occur. ++In IoT Plug and Play and [Azure Digital Twins](#azure-digital-twins), telemetry fields defined in an [interface](#interface) represent measurements. These measurements are typically values such as sensor readings that are sent by devices, like [IoT Plug and Play devices](#iot-plug-and-play-device), as a stream of data. ++Casing rules: Always lowercase. ++Example usage: Don't use the word *telemetries*, telemetry refers to the collection of data a device sends. For example: When the device connects to your IoT hub, it starts sending telemetry. One of the telemetry values the device sends is the environmental temperature. +++Applies to: Iot Hub, IoT Central, Digital Twins, IoT Edge, Device developer ++### Telemetry event ++An event in an [IoT hub](#iot-hub) that indicates the arrival of [telemetry](#telemetry) data. ++Casing rules: Always lowercase. ++Applies to: Iot Hub ++### Twin queries ++A feature of [IoT Hub](#iot-hub) that lets you use a SQL-like query language to retrieve information from your [device twins](#device-twin) or [module twins](#module-twin). ++Casing rules: Always lowercase. ++Applies to: Iot Hub ++### Twin synchronization ++The process in [IoT Hub](#iot-hub) that uses the [desired properties](#desired-properties) in your [device twins](#device-twin) or [module twins](#module-twin) to configure your [devices](#device) or [modules](#module) and retrieve [reported properties](#reported-properties) from them to store in the twin. ++Casing rules: Always lowercase. ++Applies to: Iot Hub ++## U ++### Upstream service ++A relative term describing services that feed data into the current context. For instance, in the context of [Azure Digital Twins](#azure-digital-twins), [IoT Hub](#iot-hub) is considered an upstream service because data flows from IoT Hub into Azure [Digital Twins](#digital-twin). ++Casing rules: Always lowercase. ++Applies to: Iot Hub, IoT Central, Digital Twins + |
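Several of the glossary entries above (device twin, desired properties, reported properties) describe one synchronization pattern. As a rough illustration only, the following Python sketch uses the `azure-iot-device` package to show a device app observing a desired property and acknowledging it as a reported property; the `telemetryInterval` property name and the connection string are hypothetical placeholders.

```python
# pip install azure-iot-device
# Sketch: how desired and reported properties on a device twin relate (hypothetical property name).
from azure.iot.device import IoTHubDeviceClient

client = IoTHubDeviceClient.create_from_connection_string("<device-connection-string>")
client.connect()

# Desired properties are set by a back-end app and observed by the device app.
twin = client.get_twin()
interval = twin["desired"].get("telemetryInterval", 60)

# Reported properties are set by the device and can be read or queried by the back end.
client.patch_twin_reported_properties({"telemetryInterval": interval})

client.shutdown()
```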
iot | Iot Introduction | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot/iot-introduction.md | + + Title: Introduction to the Azure Internet of Things (IoT) +description: Introduction explaining the fundamentals of Azure IoT and the IoT services, including examples that help illustrate the use of IoT. ++++ Last updated : 11/29/2022+++#Customer intent: As a newcomer to IoT, I want to understand what IoT is, what services are available, and examples of business cases so I can figure out where to start. +++# What is Azure Internet of Things (IoT)? ++The Azure Internet of Things (IoT) is a collection of Microsoft-managed cloud services that connect, monitor, and control billions of IoT assets. In simpler terms, an IoT solution is made up of one or more IoT devices that communicate with one or more back-end services hosted in the cloud. ++## IoT devices ++An IoT device typically consists of a circuit board with sensors attached, and it uses Wi-Fi to connect to the internet. For example: ++* A pressure sensor on a remote oil pump. +* Temperature and humidity sensors in an air-conditioning unit. +* An accelerometer in an elevator. +* Presence sensors in a room. ++There's a wide variety of devices available from different manufacturers to build your solution. For a list of devices certified to work with Azure IoT Hub, see the [Azure Certified for IoT device catalog](https://devicecatalog.azure.com). For prototyping, you can use devices such as an [MXChip IoT DevKit](https://microsoft.github.io/azure-iot-developer-kit/) or a [Raspberry Pi](https://www.raspberrypi.org/). The DevKit has built-in sensors for temperature, pressure, and humidity, and a gyroscope, accelerometer, and magnetometer. The Raspberry Pi lets you attach many different types of sensors. ++Microsoft provides open-source [Device SDKs](../iot-hub/iot-hub-devguide-sdks.md) that you can use to build the apps that run on your devices. These [SDKs simplify and accelerate](https://azure.microsoft.com/blog/benefits-of-using-the-azure-iot-sdks-in-your-azure-iot-solution/) the development of your IoT solutions. ++## Communication ++Typically, IoT devices send telemetry from their sensors to back-end services in the cloud. However, other types of communication are possible, such as a back-end service sending commands to your devices. The following are some examples of device-to-cloud and cloud-to-device communication: ++* A mobile refrigeration truck sends temperature data every five minutes to an IoT hub. ++* The back-end service sends a command to a device to change the frequency at which it sends telemetry to help diagnose a problem. ++* A device sends alerts based on the values read from its sensors. For example, a device monitoring a batch reactor in a chemical plant sends an alert when the temperature exceeds a certain value. ++* Your devices send information to display on a dashboard for viewing by human operators. For example, a control room in a refinery may show the temperature, pressure, and flow volumes in each pipe, enabling operators to monitor the facility. ++The [IoT Device SDKs](../iot-hub/iot-hub-devguide-sdks.md) and IoT Hub support common [communication protocols](../iot-hub/iot-hub-devguide-protocols.md) such as HTTP, MQTT, and AMQP. ++IoT devices have different characteristics when compared to other clients such as browsers and mobile apps. The device SDKs help you address the challenges of connecting devices securely and reliably to your back-end service. 
Specifically, IoT devices: ++* Are often embedded systems with no human operator. +* Can be deployed in remote locations, where physical access is expensive. +* May only be reachable through the solution back end. +* May have limited power and processing resources. +* May have intermittent, slow, or expensive network connectivity. +* May need to use proprietary, custom, or industry-specific application protocols. ++## Back-end services ++In an IoT solution, the back-end service provides functionality such as: ++* Receiving telemetry at scale from your devices, and determining how to process and store that data. +* Analyzing the telemetry to provide insights, either in real time or after the fact. +* Sending commands from the cloud to a specific device. +* Provisioning devices and controlling which devices can connect to your infrastructure. +* Controlling the state of your devices and monitoring their activities. +* Managing the firmware installed on your devices. ++For example, in a remote monitoring solution for an oil pumping station, the cloud back end uses telemetry from the pumps to identify anomalous behavior. When the back-end service identifies an anomaly, it can automatically send a command back to the device to take a corrective action. This process generates an automated feedback loop between the device and the cloud that greatly increases the solution efficiency. ++## Azure IoT examples ++For real-life examples of how organizations use Azure IoT, see [Microsoft Technical Case Studies for IoT](https://microsoft.github.io/techcasestudies/#technology=IoT&sortBy=featured). ++For an in-depth discussion of IoT architecture, see the [Microsoft Azure IoT Reference Architecture](/azure/architecture/reference-architectures/iot). ++## Next steps ++For some actual business cases and the architecture used, see the [Microsoft Azure IoT Technical Case Studies](https://microsoft.github.io/techcasestudies/#technology=IoT&sortBy=featured). ++For some sample projects that you can try out with an IoT DevKit, see the [IoT DevKit Project Catalog](https://microsoft.github.io/azure-iot-developer-kit/docs/projects/). ++For a more comprehensive explanation of the different services and how they're used, see [Azure IoT services and technologies](iot-services-and-technologies.md). |
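The cloud-to-device example above, where a back-end service changes how often a device sends telemetry, can be sketched with the Python `azure-iot-hub` service SDK. The `setTelemetryInterval` method name and its payload are illustrative assumptions; a real device would need matching handler code for them.

```python
# pip install azure-iot-hub
# Sketch of a back-end service invoking a direct method on a device (hypothetical method name).
from azure.iot.hub import IoTHubRegistryManager
from azure.iot.hub.models import CloudToDeviceMethod

registry_manager = IoTHubRegistryManager.from_connection_string("<iot-hub-service-connection-string>")

method = CloudToDeviceMethod(
    method_name="setTelemetryInterval",
    payload={"intervalInSeconds": 30},
)
response = registry_manager.invoke_device_method("<device-id>", method)
print(response.status, response.payload)
```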
iot | Iot Phone App How To | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot/iot-phone-app-how-to.md | + + Title: Use your smartphone as an Azure IoT device +description: A how-to guide that shows you how to turn your smartphone into an IoT device by using the Azure IoT Plug and Play app. ++++ Last updated : 08/24/2022+++++# How to turn your smartphone into an IoT device ++An Azure IoT solution lets you connect your IoT devices to a cloud-based IoT service. Devices send telemetry, such as temperature and humidity, and respond to commands, such as reboot and change delivery interval. Devices can also synchronize their internal state with the service, sharing properties such as device model and operating system. ++The IoT Plug and Play phone app lets you quickly get started exploring Azure IoT capabilities without the need to configure a dedicated IoT device. ++## Azure IoT Plug and Play app ++To get you started quickly, this article uses a smartphone app as an IoT device. The app sends telemetry collected from the phone's sensors, responds to commands invoked from the service, and reports property values. ++You can use this smartphone app to: ++- Explore a basic IoT scenario. +- Manage and interact with your phone remotely. +- Test your configuration. +- Start your custom device development. ++## Install the app +++## App features ++### Connect ++You can connect to an IoT Central application by scanning a QR code in IoT Central. ++To learn more, see [Connect the app](#connect-the-app) later in this guide. ++### Telemetry ++The app collects data from sensors on the phone to send as telemetry to the IoT service you're using. Sensor data is aggregated every five seconds by default, but you can change this on the app settings page: +++The following screenshot shows a device view in IoT Central that displays some of the device telemetry: +++### Properties ++The app reports device status, such as device model and manufacturer. There's also an editable property that you can modify and see the change synchronize in your Azure IoT solution: +++The following screenshot shows the writable property in IoT Central after the property was sent to the device: +++### Image upload ++Both IoT Central and IoT Hub enable file upload to Azure storage from a device. The smartphone app lets you upload an image from the device. ++To learn more about configuring your service to support file uploads from a device, see: ++- [Upload files from your device to the cloud with IoT Hub](../iot-hub/iot-hub-csharp-csharp-file-upload.md). +- [Upload files from your device to the cloud with IoT Central](../iot-central/core/howto-configure-file-uploads.md). +++### Logs ++The smartphone app writes events to a local log file that you can view from within the app. Use the log file to troubleshoot and better understand what the app is doing: +++### Settings ++The settings page in the app lets you: ++- Connect the app to your Azure IoT solution. +- Review the current device registration information. +- Reset the app by clearing the stored data. +- Customize the app appearance. +- Set the frequency at which the app sends telemetry to your IoT service. +++## Connect the app ++### Prerequisites ++If you don't have an Azure subscription, create a [free account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F) before you begin. ++<!-- To do: does this need an app template? --> +Create an IoT Central application.
To learn more, see [Create an IoT Central application](../iot-central/core/howto-create-iot-central-application.md). ++### Register a device ++Before you connect the phone app, you need to register a device in your IoT Central application. When you create a device registration, IoT Central generates the device connection information. ++To register the device in IoT Central: ++1. Sign in to your IoT Central application and navigate to the **Devices** page. ++1. Select **Create a device**. ++1. On the **Create a new device** page, select **Create**: ++ :::image type="content" source="media/iot-phone-app-how-to/iot-central-create-device.png" alt-text="Screenshot showing how to create a device in IoT Central."::: ++1. On the list of devices, select the device name, and then select **Connect**. On the **Device connection** page, you can see the QR code that you'll scan in the smartphone app: ++ :::image type="content" source="media/iot-phone-app-how-to/device-connection-qr-code.png" alt-text="Screenshot showing the device connection page with the QR code."::: ++### Connect the device ++After you register the device in IoT Central, you can connect the smartphone app by scanning the QR code. To connect the app: ++1. Open the **IoT PnP** app on your smartphone. ++1. On the welcome page, select **Scan QR code**. Point the phone's camera at the QR code. Then wait for a few seconds while the connection is established. ++1. On the telemetry page in the app, you can see the data the app is sending to IoT Central. On the logs page, you can see the device connecting and several initialization messages. ++1. On the **Settings > Registration** page, you can see the device ID and ID scope that the app used to connect to IoT Central. ++To learn more about how devices connect to IoT Central, see [How devices connect](../iot-central/core/overview-iot-central-developer.md). ++### Verify the connection ++To view the data the device is sending in your IoT Central application: ++1. Sign in to your IoT Central application and navigate to the **Devices** page. Your device has been automatically assigned to the **Smartphone** device template. ++ > [!TIP] + > You may need to refresh the page in your web browser to see when the device is assigned to the **Smartphone** device template. ++1. On the list of devices, select the device name, and then select **Overview**. The **Overview** page shows the telemetry from the smartphone sensors: ++ :::image type="content" source="media/iot-phone-app-how-to/smartphone-overview.png" alt-text="Screenshot of the device overview page in IoT Central that shows the telemetry from the smartphone sensors."::: ++1. View the **About** page to see the properties sent by the device. ++1. On the **Commands** page, run the **LightOn** command to turn on the phone's flashlight. ++> [!TIP] +> The **Raw data** page shows all the data coming from the device. ++## Next steps ++Now that you've connected your smartphone app to IoT Central, a suggested next step is to learn more about [IoT Central](../iot-central/core/overview-iot-central.md). |
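The device ID, ID scope, and key shown on the registration page are the same values that any IoT Central device uses to connect through the Device Provisioning Service (DPS). As a rough, hedged illustration of what the phone app does when it scans the QR code (not the app's actual implementation), here's a sketch using the Python device SDK; the ID scope, device ID, and symmetric key are placeholders.

```python
# Hedged sketch: provision a device through DPS (as IoT Central devices do),
# then connect to the assigned IoT hub and send one telemetry message.
# Assumes `pip install azure-iot-device`; all credential values are placeholders.
import json

from azure.iot.device import IoTHubDeviceClient, Message, ProvisioningDeviceClient

ID_SCOPE = "<id-scope-from-the-registration-page>"
DEVICE_ID = "<device-id>"
SYMMETRIC_KEY = "<device-primary-key>"

provisioning_client = ProvisioningDeviceClient.create_from_symmetric_key(
    provisioning_host="global.azure-devices-provisioning.net",
    registration_id=DEVICE_ID,
    id_scope=ID_SCOPE,
    symmetric_key=SYMMETRIC_KEY,
)
registration_result = provisioning_client.register()  # DPS returns the assigned IoT hub

device_client = IoTHubDeviceClient.create_from_symmetric_key(
    symmetric_key=SYMMETRIC_KEY,
    hostname=registration_result.registration_state.assigned_hub,
    device_id=DEVICE_ID,
)
device_client.connect()
device_client.send_message(Message(json.dumps({"batteryLevel": 87})))
device_client.disconnect()
```

A full device client would layer sensor sampling, reconnection, and property and command handling on top of this basic flow.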
iot | Iot Sdks | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot/iot-sdks.md | + + Title: Azure IoT device and service SDKs ++description: A list of the IoT SDKs and libraries. Includes SDKs for device development and SDKs for building service applications. +++++ Last updated : 02/20/2023++++# Azure IoT SDKs ++The following tables list the various SDKs you can use to build IoT solutions. ++## Device SDKs +++Use the device SDKs to develop code to run on IoT devices that connect to IoT Hub or IoT Central. ++To learn more about how to use the device SDKs, see [What is Azure IoT device and application development?](../iot-develop/about-iot-develop.md). ++## Embedded device SDKs +++Use the embedded device SDKs to develop code to run on IoT devices that connect to IoT Hub or IoT Central. ++To learn more about when to use the embedded device SDKs, see [C SDK and Embedded C SDK usage scenarios](../iot-develop/concepts-using-c-sdk-and-embedded-c-sdk.md). ++## Service SDKs +++To learn more about using the service SDKs to interact with devices through an IoT hub, see [IoT Plug and Play service developer guide](../iot-develop/concepts-developer-guide-service.md). ++## Management SDKs +++Alternatives to the management SDKs include the [Azure CLI](../iot-hub/iot-hub-create-using-cli.md), [PowerShell](../iot-hub/iot-hub-create-using-powershell.md), and [REST API](../iot-hub/iot-hub-rm-rest.md). ++## Next steps ++Suggested next steps include: ++- [Device developer guide](../iot-develop/concepts-developer-guide-device.md) +- [Service developer guide](../iot-develop/concepts-developer-guide-service.md) |
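To make the device SDK/service SDK split concrete, here's a hedged sketch that uses the Python service SDK (`azure-iot-hub`) to invoke a direct method on a device through an IoT hub. The connection string, device ID, method name, and payload are placeholders; the target device must implement a handler for the method.

```python
# Hedged sketch: invoke a direct method on a device by using the Python service SDK.
# Assumes `pip install azure-iot-hub`; the connection string and IDs are placeholders.
from azure.iot.hub import IoTHubRegistryManager
from azure.iot.hub.models import CloudToDeviceMethod

IOTHUB_CONNECTION_STRING = "HostName=<your-hub>.azure-devices.net;SharedAccessKeyName=service;SharedAccessKey=<key>"
DEVICE_ID = "<device-id>"

registry_manager = IoTHubRegistryManager(IOTHUB_CONNECTION_STRING)

method = CloudToDeviceMethod(
    method_name="reboot",            # the device must register a handler for this name
    payload={"delaySeconds": 5},
    response_timeout_in_seconds=30,
)

response = registry_manager.invoke_device_method(DEVICE_ID, method)
print(response.status, response.payload)
```

Management operations, such as creating an IoT hub, use the separate management SDKs or the CLI, PowerShell, and REST alternatives listed above.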
iot | Iot Security Architecture | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot/iot-security-architecture.md | + + Title: Security architecture ++description: Security architecture guidelines and considerations for Azure IoT solutions illustrated using the IoT reference architecture. ++++ Last updated : 02/10/2023++++# Security architecture for IoT solutions ++When you design and architect an IoT solution, it's important to understand the potential threats and include appropriate defenses. Understanding how an attacker might compromise a system helps you to make sure that the appropriate mitigations are in place from the start. ++## Threat modeling ++Microsoft recommends using a threat modeling process as part of your IoT solution design. If you're not familiar with threat modeling and the secure development lifecycle, see: ++- [Threat modeling](https://www.microsoft.com/securityengineering/sdl/threatmodeling) +- [Secure development best practices on Azure](../security/develop/secure-dev-overview.md) +- [Getting started guide](../security/develop/threat-modeling-tool-getting-started.md) ++## Security in IoT ++It's helpful to divide your IoT architecture into several zones as part of the threat modeling exercise: ++- Device +- Field gateway +- Cloud gateway +- Service ++Each zone often has its own data and authentication and authorization requirements. You can also use zones to isolate damage and restrict the impact of low trust zones on higher trust zones. ++Each zone is separated by a _trust boundary_, shown as the dotted red line in the following diagram. It represents a transition of data from one source to another. During this transition, the data could be subject to the following threats: ++- Spoofing +- Tampering +- Repudiation +- Information disclosure +- Denial of service +- Elevation of privilege ++To learn more, see the [STRIDE model](../security/develop/threat-modeling-tool-threats.md#stride-model). +++You can use STRIDE to model the threats to each component within each zone. The following sections elaborate on each of the components and specific security concerns and solutions that should be put into place. ++The remainder of this article discusses the threats and mitigations for these zones and components in more detail. ++## Device zone ++The device environment is the space around the device where physical access and local network digital access to the device is feasible. A local network is assumed to be distinct and insulated from, but potentially bridged to, the public internet. The device environment includes any short-range wireless radio technology that permits peer-to-peer communication of devices. It doesn't include any network virtualization technology creating the illusion of such a local network. It doesn't include public operator networks that require any two devices to communicate across public network space to enter a peer-to-peer communication relationship. ++## Field gateway zone ++A field gateway is a device, appliance, or general-purpose server computer software that acts as a communication enabler and, potentially, as a device control system and device data processing hub. The field gateway zone includes the field gateway itself and all the devices attached to it. Field gateways act outside dedicated data processing facilities, are usually location bound, are potentially subject to physical intrusion, and have limited operational redundancy.
A field gateway is typically something that an attacker could physically sabotage if they gained physical access to it. ++A field gateway differs from a traffic router in that it has an active role in managing access and information flow. The field gateway has two distinct surface areas. One faces the devices attached to it and represents the inside of the zone. The other faces all external parties and is the edge of the zone. ++## Cloud gateway zone ++A cloud gateway is a system that enables remote communication from and to devices or field gateways deployed in multiple sites. The cloud gateway typically enables a cloud-based control and data analysis system, or a federation of such systems. In some cases, a cloud gateway may immediately facilitate access to special-purpose devices from terminals such as tablets or phones. In the cloud gateway zone, operational measures prevent targeted physical access, and the zone isn't necessarily exposed to a public cloud infrastructure. ++A cloud gateway may be mapped into a network virtualization overlay to insulate the cloud gateway and all of its attached devices or field gateways from any other network traffic. The cloud gateway itself isn't a device control system or a processing or storage facility for device data; those facilities interface with the cloud gateway. The cloud gateway zone includes the cloud gateway itself along with all field gateways and devices directly or indirectly attached to it. The edge of the zone is a distinct surface area that all external parties communicate through. ++## Services zone ++A service in this context is any software component or module that interfaces with devices through a field or cloud gateway. A service can collect data from the devices and command and control those devices. A service is a mediator that acts under its identity towards gateways and other subsystems to: ++- Store and analyze data +- Issue commands to devices based on data insights or schedules +- Expose information and control capabilities to authorized end users ++## IoT devices ++IoT devices are often special-purpose devices that range from simple temperature sensors to complex factory production lines with thousands of components inside them. Example IoT device capabilities include: ++- Measuring and reporting environmental conditions +- Turning valves +- Controlling servos +- Sounding alarms +- Switching lights on or off ++The purpose of these devices dictates their technical design and the available budget for their production and scheduled lifetime operation. The combination of these factors constrains the available operational energy budget, physical footprint, and available storage, compute, and security capabilities. ++Things that can go wrong with an automated or remotely controlled IoT device include: ++- Physical defects +- Control logic defects +- Willful unauthorized intrusion and manipulation ++The consequences of these failures could be severe, such as destroyed production lots, buildings burnt down, or injury and death. Therefore, there's a high security bar for devices that make things move or that report sensor data that results in commands that cause things to move. ++### Device control and device data interactions ++Connected special-purpose devices have a significant number of potential interaction surface areas and interaction patterns, all of which must be considered to provide a framework for securing digital access to those devices.
_Digital access_ refers to operations that are carried out through software and hardware rather than through direct physical access to the device. For example, physical access could be controlled by putting the device into a room with a lock on the door. While physical access can't be denied using software and hardware, measures can be taken to prevent physical access from leading to system interference. ++As you explore the interaction patterns, look at _device control_ and _device data_ with the same level of attention. Device control refers to any information provided to a device with the intention of modifying its behavior. Device data refers to information that a device emits to any other party about its state and the observed state of its environment. ++## Threat modeling for the Azure IoT reference architecture ++This section uses the [Azure IoT reference architecture](/azure/architecture/reference-architectures/iot) to demonstrate how to think about threat modeling for IoT and how to address the threats identified: +++The following diagram provides a simplified view of the reference architecture by using a data flow diagram model: +++The architecture separates the device and field gateway capabilities. This approach enables you to use more secure field gateway devices. Field gateway devices can communicate with the cloud gateway using secure protocols, which typically require greater processing power than a simple device, such as a thermostat, could provide on its own. In the **Azure Services Zone** in the diagram, the Azure IoT Hub service is the cloud gateway. ++Based on the architecture outlined previously, the following sections show some threat modeling examples. The examples focus on the core elements of a threat model: ++- Processes +- Communication +- Storage ++### Processes ++Here are some examples of threats in the processes category. The threats are categorized based on the STRIDE model: ++**Spoofing**: An attacker may extract cryptographic keys from a device, either at the software or hardware level. The attacker then uses these keys to access the system from a different physical or virtual device by using the identity of the original device. ++**Denial of Service**: A device can be rendered incapable of functioning or communicating by interfering with radio frequencies or cutting wires. For example, a surveillance camera that had its power or network connection intentionally knocked out can't report data at all. ++**Tampering**: An attacker may partially or wholly replace the software on the device. If the device's cryptographic keys are available to the attacker's code, it can then use the identity of the device. ++**Tampering**: A surveillance camera that's showing a visible-spectrum picture of an empty hallway could be aimed at a photograph of such a hallway. A smoke or fire sensor could be reporting someone holding a lighter under it. In either case, the device may be technically fully trustworthy towards the system, but it reports manipulated information. ++**Tampering**: An attacker may use extracted cryptographic keys to intercept and suppress data sent from the device and replace it with false data that's authenticated with the stolen keys. ++**Information Disclosure**: If the device is running manipulated software, it could potentially leak data to unauthorized parties.
++**Information Disclosure**: An attacker may use extracted cryptographic keys to inject code into the communication path between the device and field gateway or cloud gateway to siphon off information. ++**Denial of Service**: The device can be turned off or turned into a mode where communication isn't possible (which is intentional in many industrial machines). ++**Tampering**: The device can be reconfigured to operate in a state unknown to the control system (outside of known calibration parameters) and thus provide data that can be misinterpreted. ++**Elevation of Privilege**: A device that does a specific function can be forced to do something else. For example, a valve that is programmed to open halfway can be tricked into opening all the way. ++**Spoofing/Tampering/Repudiation**: If not secured (which is rarely the case with consumer remote controls), an attacker can manipulate the state of a device anonymously. A good illustration is a remote control that can turn off any TV. ++The following table shows example mitigations to these threats. The values in the threat column are abbreviations: ++- Spoofing (S) +- Tampering (T) +- Repudiation (R) +- Information disclosure (I) +- Denial of service (D) +- Elevation of privilege (E) ++| Component | Threat | Mitigation | Risk | Implementation | +| | | | | | +| Device |S |Assigning identity to the device and authenticating the device |Replacing the device or part of the device with some other device. How do you know you're talking to the right device? |Authenticating the device, using Transport Layer Security (TLS) or IPSec. Infrastructure should support using pre-shared key (PSK) on those devices that can't handle full asymmetric cryptography. Use Azure AD, [OAuth](https://www.rfc-editor.org/pdfrfc/rfc6755.txt.pdf) | +|| TRID |Apply tamperproof mechanisms to the device, for example, by making it hard or impossible to extract keys and other cryptographic material from the device. |The risk is that someone tampers with the device (physical interference). How can you be sure that the device hasn't been tampered with? |The most effective mitigation is a trusted platform module (TPM). A TPM stores keys in special on-chip circuitry from which the keys can't be read, but can only be used for cryptographic operations that use the key. Memory encryption of the device. Key management for the device. Signing the code. | +|| E |Having access control of the device. Authorization scheme. |If the device allows for individual actions to be performed based on commands from an outside source, or even compromised sensors, it allows the attacker to perform operations not otherwise accessible. |Having an authorization scheme for the device | +| Field Gateway |S |Authenticating the field gateway to the cloud gateway (such as certificate-based, PSK, or claims-based) |If someone can spoof the field gateway, then it can present itself as any device. |TLS RSA/PSK, IPSec, [RFC 4279](https://tools.ietf.org/html/rfc4279). All the same key storage and attestation concerns of devices in general apply; the best case is to use a TPM. 6LowPAN extension for IPSec to support Wireless Sensor Networks (WSN). | +|| TRID |Protect the field gateway against tampering (TPM) |Spoofing attacks that trick the cloud gateway into thinking it's talking to the field gateway could result in information disclosure and data tampering |Memory encryption, TPMs, authentication. | +|| E |Access control mechanism for the field gateway | | | ++### Communication ++Here are some examples of threats in the communication category.
The threats are categorized based on the STRIDE model: ++**Denial of Service**: Constrained devices are generally under DoS threat when they actively listen for inbound connections or unsolicited datagrams on a network. An attacker can open many connections in parallel and either not service them or service them slowly, or flood the device with unsolicited traffic. In both cases, the device can effectively be rendered inoperable on the network. ++**Spoofing, Information Disclosure**: Constrained devices and special-purpose devices often have one-for-all security facilities such as password or PIN protection. Sometimes they wholly rely on trusting the network, and grant access to information to any device that's on the same network. If the network is protected by a shared key that gets disclosed, an attacker could control the device or observe the data it transmits. ++**Spoofing**: An attacker may intercept or partially override the broadcast and spoof the originator. ++**Tampering**: An attacker may intercept or partially override the broadcast and send false information. ++**Information Disclosure:** An attacker may eavesdrop on a broadcast and obtain information without authorization. ++**Denial of Service:** An attacker may jam the broadcast signal and deny information distribution. ++The following table shows example mitigations to these threats: ++| Component | Threat | Mitigation | Risk | Implementation | +| | | | | | +| Device IoT Hub |TID |(D)TLS (PSK/RSA) to encrypt the traffic |Eavesdropping on or interfering with the communication between the device and the gateway |Security on the protocol level. With custom protocols, you need to figure out how to protect them. In most cases, the communication takes place from the device to the IoT Hub (device initiates the connection). | +| Device to Device |TID |(D)TLS (PSK/RSA) to encrypt the traffic. |Reading data in transit between devices. Tampering with the data. Overloading the device with new connections |Security on the protocol level (MQTT/AMQP/HTTP/CoAP). With custom protocols, you need to figure out how to protect them. The mitigation for the DoS threat is to peer devices through a cloud or field gateway and have them only act as clients towards the network. The peering may result in a direct connection between the peers after having been brokered by the gateway | +| External Entity Device |TID |Strong pairing of the external entity to the device |Eavesdropping on the connection to the device. Interfering with the communication with the device |Securely pairing the external entity to the device using NFC/Bluetooth LE. Controlling the operational panel of the device (Physical) | +| Field Gateway Cloud Gateway |TID |TLS (PSK/RSA) to encrypt the traffic. |Eavesdropping on or interfering with the communication between the device and the gateway |Security on the protocol level (MQTT/AMQP/HTTP/CoAP). With custom protocols, you need to figure out how to protect them. | +| Device Cloud Gateway |TID |TLS (PSK/RSA) to encrypt the traffic. |Eavesdropping on or interfering with the communication between the device and the gateway |Security on the protocol level (MQTT/AMQP/HTTP/CoAP). With custom protocols, you need to figure out how to protect them. | ++### Storage ++The following table shows example mitigations to the storage threats: ++| Component | Threat | Mitigation | Risk | Implementation | +| | | | | | +| Device storage |TRID |Storage encryption, signing the logs |Reading data from the storage, tampering with telemetry data. Tampering with queued or cached command control data.
Tampering with configuration or firmware update packages while cached or queued locally can lead to OS and/or system components being compromised |Encryption, message authentication code (MAC), or digital signature. Where possible, strong access control through resource access control lists (ACLs) or permissions. | +| Device OS image |TRID | |Tampering with the OS / replacing the OS components |Read-only OS partition, signed OS image, encryption | +| Field Gateway storage (queuing the data) |TRID |Storage encryption, signing the logs |Reading data from the storage, tampering with telemetry data, tampering with queued or cached command control data. Tampering with configuration or firmware update packages (destined for devices or field gateway) while cached or queued locally can lead to OS and/or system components being compromised |BitLocker | +| Field Gateway OS image |TRID | |Tampering with the OS / replacing the OS components |Read-only OS partition, signed OS image, encryption | ++## See also ++Read about IoT Hub security in [Control access to IoT Hub](../iot-hub/iot-hub-devguide-security.md) in the IoT Hub developer guide. |
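The mitigation tables above repeatedly rely on authenticated devices and encrypted channels. For devices that can't use X.509 certificates, IoT Hub accepts shared access signature (SAS) tokens derived from a per-device symmetric key. The following is a minimal sketch of the documented IoT Hub SAS token format; the hub name, device ID, key, and lifetime are placeholders.

```python
# Minimal sketch: build an IoT Hub SAS token from a device's symmetric key.
# The resource URI, device key, and lifetime below are placeholder values.
import base64
import hashlib
import hmac
import time
import urllib.parse


def generate_sas_token(resource_uri: str, device_key: str, ttl_seconds: int = 3600) -> str:
    expiry = int(time.time()) + ttl_seconds
    string_to_sign = f"{urllib.parse.quote_plus(resource_uri)}\n{expiry}"
    signature = hmac.new(
        base64.b64decode(device_key),
        string_to_sign.encode("utf-8"),
        hashlib.sha256,
    ).digest()
    return (
        "SharedAccessSignature "
        f"sr={urllib.parse.quote_plus(resource_uri)}"
        f"&sig={urllib.parse.quote_plus(base64.b64encode(signature).decode())}"
        f"&se={expiry}"
    )


token = generate_sas_token(
    "<your-hub>.azure-devices.net/devices/<device-id>",
    "<device-primary-key>",
)
print(token)
```

In practice, the device SDKs generate and refresh these tokens for you; building them by hand is mostly useful for constrained devices and for troubleshooting.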
iot | Iot Security Best Practices | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot/iot-security-best-practices.md | + + Title: Security best practices ++description: Security best practices for building, deploying, and operating your IoT solution. Includes recommendations for devices, data, and infrastructure. ++++ Last updated : 02/10/2023++++# Security best practices for IoT solutions ++You can divide security in an IoT solution into the following three areas: ++- **Device security**: Securing the IoT device while it's deployed in the wild. ++- **Connection security**: Ensuring all data transmitted between the IoT device and IoT Hub is confidential and tamper-proof. ++- **Cloud security**: Providing a means to secure data while it moves through, and is stored in, the cloud. ++Implementing the recommendations in this article helps you meet the security obligations described in the shared responsibility model. To learn more about what Microsoft does to fulfill service provider responsibilities, see [Shared responsibilities for cloud computing](../security/fundamentals/shared-responsibility.md). ++## Responsibilities ++You can develop and execute an IoT security strategy with the active participation of the various players involved in the manufacturing, development, and deployment of IoT devices and infrastructure. The following list is a high-level description of these players. ++- **Hardware manufacturer/integrator**: The manufacturers of IoT hardware you're deploying, the integrators assembling hardware from various manufacturers, or the suppliers providing the hardware. ++- **Solution developer**: The solution developer may be part of an in-house team or a system integrator specializing in this activity. The IoT solution developer can develop various components of the IoT solution from scratch, or integrate various off-the-shelf or open-source components. ++- **Solution deployer**: After an IoT solution is developed, it needs to be deployed in the field. This process involves deployment of hardware, interconnection of devices, and deployment of solutions in hardware devices or the cloud. ++- **Solution operator**: After the IoT solution is deployed, it requires long-term operations, monitoring, upgrades, and maintenance. These tasks can be done by an in-house team that monitors the correct behavior of the overall IoT infrastructure. ++## Microsoft Defender for IoT ++Microsoft Defender for IoT can automatically monitor some of the recommendations included in this article. Microsoft Defender for IoT should be the first line of defense to protect your resources in Azure. Microsoft Defender for IoT periodically analyzes the security state of your Azure resources to identify potential security vulnerabilities. It then provides you with recommendations on how to address them. ++- To learn more about Microsoft Defender for IoT recommendations, see [Security recommendations in Microsoft Defender for IoT](../security-center/security-center-recommendations.md). +- To learn more about Microsoft Defender for IoT, see [What is Microsoft Defender for IoT?](../security-center/security-center-introduction.md). ++## Device security ++- **Scope hardware to minimum requirements**: Select your device hardware to include the minimum features required for its operation, and nothing more. For example, only include USB ports if they're necessary for the operation of the device in your solution. Extra features can expose the device to unwanted attack vectors.
++- **Select tamper-proof hardware**: Select device hardware with built-in mechanisms to detect physical tampering, such as the opening of the device cover or the removal of a part of the device. These tamper signals can be part of the data stream uploaded to the cloud, which can alert operators to these events. ++- **Select secure hardware**: If possible, choose device hardware that includes security features such as secure and encrypted storage and boot functionality based on a Trusted Platform Module. These features make devices more secure and help protect the overall IoT infrastructure. ++- **Enable secure upgrades**: Firmware upgrades during the lifetime of the device are inevitable. Build devices with secure paths for upgrades and cryptographic assurance of firmware versions to secure your devices during and after upgrades. ++- **Follow a secure software development methodology**: The development of secure software requires you to consider security from the inception of the project all the way through implementation, testing, and deployment. The [Microsoft Security Development Lifecycle](https://www.microsoft.com/securityengineering/sdl/) provides a step-by-step approach to building secure software. ++- **Use device SDKs whenever possible**: Device SDKs implement various security features such as encryption and authentication that help you develop robust and secure device applications. To learn more, see [Understand and use Azure IoT Hub SDKs](../iot-hub/iot-hub-devguide-sdks.md). ++- **Choose open-source software with care**: Open-source software provides an opportunity to quickly develop solutions. When you're choosing open-source software, consider the activity level of the community for each open-source component. An active community ensures that software is supported and that issues are discovered and addressed. An obscure and inactive open-source software project might not be supported, and issues aren't likely to be discovered. ++- **Deploy hardware securely**: IoT deployments may require you to deploy hardware in unsecure locations, such as in public spaces or unsupervised locales. In such situations, ensure that hardware deployment is as tamper-proof as possible. For example, if the hardware has USB ports, ensure that they're covered securely. ++- **Keep authentication keys safe**: During deployment, each device requires device IDs and associated authentication keys generated by the cloud service. Keep these keys physically safe even after the deployment. Any compromised key can be used by a malicious device to masquerade as an existing device. ++- **Keep the system up-to-date**: Ensure that device operating systems and all device drivers are upgraded to the latest versions. Keeping operating systems up-to-date helps ensure that they're protected against malicious attacks. ++- **Protect against malicious activity**: If the operating system permits, install the latest antivirus and antimalware capabilities on each device operating system. ++- **Audit frequently**: Auditing IoT infrastructure for security-related issues is key when responding to security incidents. Most operating systems provide built-in event logging that you should review frequently to make sure no security breach has occurred. A device can send audit information as a separate telemetry stream to the cloud service, where it can be analyzed.
++- **Follow device manufacturer security and deployment best practices**: If the device manufacturer provides security and deployment guidance, follow that guidance in addition to the generic guidance listed in this article. ++- **Use a field gateway to provide security services for legacy or constrained devices**: Legacy and constrained devices might lack the capability to encrypt data, connect to the internet, or provide advanced auditing. In these cases, a modern and secure field gateway can aggregate data from legacy devices and provide the security required for connecting these devices over the internet. Field gateways can provide secure authentication, negotiation of encrypted sessions, receipt of commands from the cloud, and many other security features. ++## Connection security ++- **Use X.509 certificates to authenticate your devices to IoT Hub**: IoT Hub supports both X.509 certificate-based authentication and security tokens as methods for a device to authenticate with your IoT hub. If possible, use X.509-based authentication in production environments because it provides greater security. To learn more, see [Authenticating a device to IoT Hub](../iot-hub/iot-hub-dev-guide-sas.md#authenticating-a-device-to-iot-hub). A minimal certificate-based connection sketch appears at the end of this article. ++- **Use Transport Layer Security (TLS) 1.2 to secure connections from devices**: IoT Hub uses TLS to secure connections from IoT devices and services. Three versions of the TLS protocol are currently supported: 1.0, 1.1, and 1.2. TLS 1.0 and 1.1 are considered legacy. To learn more, see [Transport Layer Security (TLS) support in IoT Hub](../iot-hub/iot-hub-tls-support.md). ++- **Ensure you have a way to update the TLS root certificate on your devices**: TLS root certificates are long-lived, but they still may expire or be revoked. If there's no way of updating the certificate on the device, the device may not be able to connect to IoT Hub or any other cloud service at a later date. ++- **Consider using Azure Private Link**: Azure Private Link lets you connect your devices to a private endpoint on your VNet, enabling you to block access to your IoT hub's public device-facing endpoints. To learn more, see [Ingress connectivity to IoT Hub using Azure Private Link](../iot-hub/virtual-network-support.md#ingress-connectivity-to-iot-hub-using-azure-private-link). ++## Cloud security ++- **Follow a secure software development methodology**: The development of secure software requires you to consider security from the inception of the project all the way through implementation, testing, and deployment. The [Microsoft Security Development Lifecycle](https://www.microsoft.com/securityengineering/sdl/) provides a step-by-step approach to building secure software. ++- **Choose open-source software with care**: Open-source software provides an opportunity to quickly develop solutions. When you're choosing open-source software, consider the activity level of the community for each open-source component. An active community ensures that software is supported and that issues are discovered and addressed. An obscure and inactive open-source software project might not be supported, and issues aren't likely to be discovered. ++- **Integrate with care**: Many software security flaws exist at the boundary of libraries and APIs. Functionality that may not be required for the current deployment might still be available through an API layer. To ensure overall security, make sure to check all interfaces of components being integrated for security flaws.
++- **Protect cloud credentials**: An attacker can use the cloud authentication credentials you use to configure and operate your IoT deployment to gain access to and compromise your IoT system. Protect the credentials by changing the password frequently, and don't use these credentials on public machines. ++- **Define access controls for your IoT hub**: Understand and define the type of access that each component in your IoT Hub solution needs based on the required functionality. There are two ways you can grant permissions for the service APIs to connect to your IoT hub: [Azure Active Directory](../iot-hub/iot-hub-dev-guide-azure-ad-rbac.md) or [Shared Access signatures](../iot-hub/iot-hub-dev-guide-sas.md). ++- **Define access controls for backend services**: Other Azure services can consume the data your IoT Hub ingests from your devices by using the IoT hub's Event Hubs-compatible endpoint. You can also use IoT Hub message routing to deliver the data from your devices to other Azure services. Understand and configure appropriate access permissions for IoT Hub to connect to these services. To learn more, see [Read device-to-cloud messages from the built-in endpoint](../iot-hub/iot-hub-devguide-messages-read-builtin.md) and [Use IoT Hub message routing to send device-to-cloud messages to different endpoints](../iot-hub/iot-hub-devguide-messages-d2c.md). ++- **Monitor your IoT solution from the cloud**: Monitor the overall health of your IoT Hub solution using the [metrics in Azure Monitor](../iot-hub/monitor-iot-hub.md). ++- **Set up diagnostics**: Monitor your operations by logging events in your solution, and then sending the diagnostic logs to Azure Monitor. To learn more, see [Monitor and diagnose problems in your IoT hub](../iot-hub/monitor-iot-hub.md). ++## Next steps ++Read about IoT Hub security in [Azure security baseline for Azure IoT Hub](/security/benchmark/azure/baselines/iot-hub-security-baseline?toc=/azure/iot-hub/TOC.json) and [Security in your IoT workload](/azure/architecture/framework/iot/iot-security). |
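As referenced in the connection security recommendations above, here's a minimal, hedged sketch of a device authenticating to IoT Hub with an X.509 certificate by using the Python device SDK (`azure-iot-device`). The host name, device ID, and certificate file paths are placeholders, and a production device would also verify TLS requirements and handle reconnection.

```python
# Hedged sketch: authenticate a device to IoT Hub with an X.509 client certificate.
# Assumes `pip install azure-iot-device`; file paths and names are placeholders.
from azure.iot.device import IoTHubDeviceClient, Message, X509

x509 = X509(
    cert_file="device-cert.pem",   # device certificate (placeholder path)
    key_file="device-key.pem",     # private key (placeholder path)
    pass_phrase=None,              # set this if the private key is encrypted
)

client = IoTHubDeviceClient.create_from_x509_certificate(
    x509=x509,
    hostname="<your-hub>.azure-devices.net",
    device_id="<device-id>",
)

client.connect()
client.send_message(Message('{"status": "certificate auth OK"}'))
client.disconnect()
```

The certificate must match the thumbprint or CA registered for the device identity in your IoT hub.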
iot | Iot Services And Technologies | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot/iot-services-and-technologies.md | + + Title: Azure Internet of Things (IoT) technologies and solutions +description: Describes the collection of technologies and services you can use to build an Azure IoT solution. ++++ Last updated : 11/29/2022++++# What Azure technologies and services can you use to create IoT solutions? ++Azure IoT technologies and services provide you with options to create a wide variety of IoT solutions that enable digital transformation for your organization. For example, you can: ++* Use [Azure IoT Central](https://apps.azureiotcentral.com), a managed IoT application platform, to evaluate your IoT solution. +* Use Azure IoT platform services such as [Azure IoT Hub](../iot-hub/about-iot-hub.md) and the [Azure IoT device SDKs](../iot-hub/iot-hub-devguide-sdks.md) to build a custom IoT solution from scratch. ++ ++## Azure IoT Central ++[IoT Central](https://apps.azureiotcentral.com) is an IoT application platform as a service (aPaaS) that reduces the burden and cost of developing, managing, and maintaining IoT solutions. Use IoT Central to quickly evaluate your IoT scenario and assess the opportunities it can create for your business. IoT Central streamlines the development of a complex and continually evolving IoT infrastructure by letting you focus on determining the business impact you can create with your IoT data. ++The web UI lets you quickly connect devices, monitor device conditions, create rules, and manage devices and their data throughout their life cycle. Furthermore, it enables you to act on device insights by extending IoT intelligence into line-of-business applications. Once you've used IoT Central to evaluate your IoT scenario, you can then build your enterprise-ready solutions by using the power of the Azure IoT platform. ++Choose devices from the [Azure Certified for IoT device catalog](https://devicecatalog.azure.com) to quickly connect to your solution. Use the IoT Central web UI to monitor and manage your devices to keep them healthy and connected. Use connectors and APIs to integrate your IoT Central application with other business applications. ++As a fully managed application platform, IoT Central has a simple, predictable pricing model. ++## Custom solutions ++To build an IoT solution from scratch, use one or more of the following Azure IoT technologies and services: ++### Devices ++Develop your IoT devices using one of the [Azure IoT Starter Kits](/samples/azure-samples/azure-iot-starter-kits/azure-iot-starter-kits/) or choose a device to use from the [Azure Certified for IoT device catalog](https://devicecatalog.azure.com). Implement your embedded code using the open-source [device SDKs](../iot-hub/iot-hub-devguide-sdks.md). The device SDKs support multiple operating systems, such as Linux, Windows, and real-time operating systems. There are SDKs for multiple programming languages, such as [C](https://github.com/Azure/azure-iot-sdk-c), [Node.js](https://github.com/Azure/azure-iot-sdk-node), [Java](https://github.com/Azure/azure-iot-sdk-java), [.NET](https://github.com/Azure/azure-iot-sdk-csharp), and [Python](https://github.com/Azure/azure-iot-sdk-python). ++You can further simplify how you create the embedded code for your devices by following the [IoT Plug and Play](../iot-develop/overview-iot-plug-and-play.md) conventions.
IoT Plug and Play enables solution developers to integrate devices with their solutions without writing any embedded code. At the core of IoT Plug and Play is a _device capability model_ schema that describes device capabilities. Use the device capability model to generate your embedded device code and configure a cloud-based solution such as an IoT Central application. ++[Azure IoT Edge](../iot-edge/about-iot-edge.md) lets you offload parts of your IoT workload from your Azure cloud services to your devices. IoT Edge can reduce latency in your solution, reduce the amount of data your devices exchange with the cloud, and enable offline scenarios. You can manage IoT Edge devices from IoT Central. ++[Azure Sphere](/azure-sphere/product-overview/what-is-azure-sphere) is a secured, high-level application platform with built-in communication and security features for internet-connected devices. It includes a secured microcontroller unit, a custom Linux-based operating system, and a cloud-based security service that provides continuous, renewable security. ++### Cloud connectivity ++The [Azure IoT Hub](../iot-hub/about-iot-hub.md) service enables reliable and secure bidirectional communications between millions of IoT devices and a cloud-based solution. [Azure IoT Hub Device Provisioning Service](../iot-dps/about-iot-dps.md) is a helper service for IoT Hub. The service provides zero-touch, just-in-time provisioning of devices to the right IoT hub without requiring human intervention. The |
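To ground the IoT Plug and Play description above, here's a hedged sketch of what a minimal device capability model (a DTDL interface) might look like, expressed as a Python dictionary purely for illustration; the model ID, telemetry, property, and command names are invented examples, not values from this article.

```python
# Illustrative sketch only: the shape of a minimal DTDL v2 interface for a
# thermostat-style device, written as a Python dict. The "dtmi" ID and member
# names are invented examples.
THERMOSTAT_MODEL_ID = "dtmi:com:example:Thermostat;1"

thermostat_interface = {
    "@context": "dtmi:dtdl:context;2",
    "@id": THERMOSTAT_MODEL_ID,
    "@type": "Interface",
    "displayName": "Thermostat",
    "contents": [
        {"@type": "Telemetry", "name": "temperature", "schema": "double"},
        {"@type": "Property", "name": "targetTemperature", "schema": "double", "writable": True},
        {"@type": "Command", "name": "reboot"},
    ],
}

# A real device announces its model ID when it connects so that IoT Central or
# another IoT Plug and Play-aware service can resolve the model; check your device
# SDK version for the exact parameter used to pass the model ID at client creation.
```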