Service | Microsoft Docs article | Related commit history on GitHub | Change details |
---|---|---|---|
active-directory-b2c | Custom Policies Series Call Rest Api | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/custom-policies-series-call-rest-api.md | You need to deploy an app, which will serve as your external app. Your custom po "code" : "errorCode", "requestId": "requestId", "userMessage" : "The access code you entered is incorrect. Please try again.",- "developerMessage" : `The The provided code ${req.body.accessCode} does not match the expected code for user.`, + "developerMessage" : `The provided code ${req.body.accessCode} does not match the expected code for user.`, "moreInfo" :"https://docs.microsoft.com/en-us/azure/active-directory-b2c/string-transformations" }; res.status(409).send(errorResponse); You need to deploy an app, which will serve as your external app. Your custom po "code": "errorCode", "requestId": "requestId", "userMessage": "The access code you entered is incorrect. Please try again.",- "developerMessage": "The The provided code 54321 does not match the expected code for user.", + "developerMessage": "The provided code 54321 does not match the expected code for user.", "moreInfo": "https://docs.microsoft.com/en-us/azure/active-directory-b2c/string-transformations" } ``` |
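The Node.js excerpts above define the REST API error contract that the custom policy consumes: HTTP 409 with `code`, `requestId`, `userMessage`, `developerMessage`, and `moreInfo` fields. For readers working in another stack, here's a minimal ASP.NET Core sketch of that same contract. The route name and the expected code value are illustrative assumptions, not part of the original sample.

```csharp
var app = WebApplication.CreateBuilder(args).Build();

// Hypothetical endpoint name; B2C would call it via a RESTful technical profile.
app.MapPost("/validate-accesscode", (AccessCodeRequest request) =>
{
    const string expectedCode = "12345"; // illustrative only

    if (request.AccessCode == expectedCode)
    {
        return Results.Ok();
    }

    // Mirror the error contract shown in the Node.js example above.
    return Results.Json(new
    {
        code = "errorCode",
        requestId = "requestId",
        userMessage = "The access code you entered is incorrect. Please try again.",
        developerMessage = $"The provided code {request.AccessCode} does not match the expected code for user.",
        moreInfo = "https://docs.microsoft.com/en-us/azure/active-directory-b2c/string-transformations"
    }, statusCode: 409);
});

app.Run();

record AccessCodeRequest(string AccessCode);
```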
active-directory | Plan Auto User Provisioning | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/app-provisioning/plan-auto-user-provisioning.md | This article uses the following terms: * Single sign-on (SSO) - The ability for a user to sign-on once and access all SSO enabled applications. In the context of user provisioning, SSO is a result of users having a single account to access all systems that use automatic user provisioning. -* Source system - The repository of users that the Azure AD provisions from. Azure AD is the source system for most pre-integrated provisioning connectors. However, there are some exceptions for cloud applications such as SAP, Workday, and AWS. For example, see [User provisioning from Workday to AD](../saas-apps/workday-inbound-tutorial.md). +* Source system - The repository of users that the Azure AD provisions from. Azure AD is the source system for most preintegrated provisioning connectors. However, there are some exceptions for cloud applications such as SAP, Workday, and AWS. For example, see [User provisioning from Workday to AD](../saas-apps/workday-inbound-tutorial.md). * Target system - The repository of users that the Azure AD provisions to. The Target system is typically a SaaS application such as ServiceNow, Zscaler, and Slack. The target system can also be an on-premises system such as AD. Use the Azure portal to view and manage all the applications that support provis ### Determine the type of connector to use -The actual steps required to enable and configure automatic provisioning vary depending on the application. If the application you wish to automatically provision is listed in the [Azure AD SaaS app gallery](../saas-apps/tutorial-list.md), then you should select the [app-specific integration tutorial](../saas-apps/tutorial-list.md) to configure its pre-integrated user provisioning connector. +The actual steps required to enable and configure automatic provisioning vary depending on the application. If the application you wish to automatically provision is listed in the [Azure AD SaaS app gallery](../saas-apps/tutorial-list.md), then you should select the [app-specific integration tutorial](../saas-apps/tutorial-list.md) to configure its preintegrated user provisioning connector. If not, follow the steps: -1. [Create a request](../manage-apps/v2-howto-app-gallery-listing.md) for a pre-integrated user provisioning connector. Our team works with you and the application developer to onboard your application to our platform if it supports SCIM. +1. [Create a request](../manage-apps/v2-howto-app-gallery-listing.md) for a preintegrated user provisioning connector. Our team works with you and the application developer to onboard your application to our platform if it supports SCIM. -1. Use the [BYOA SCIM](../app-provisioning/use-scim-to-provision-users-and-groups.md) generic user provisioning support for the app. Using SCIM is a requirement for Azure AD to provision users to the app without a pre-integrated provisioning connector. +1. Use the [BYOA SCIM](../app-provisioning/use-scim-to-provision-users-and-groups.md) generic user provisioning support for the app. Using SCIM is a requirement for Azure AD to provision users to the app without a preintegrated provisioning connector. 1. If the application is able to utilize the BYOA SCIM connector, then refer to [BYOA SCIM integration tutorial](../app-provisioning/use-scim-to-provision-users-and-groups.md) to configure the BYOA SCIM connector for the application. 
Before implementing automatic user provisioning, you must determine the users an ### Define user and group attribute mapping -To implement automatic user provisioning, you need to define the user and group attributes that are needed for the application. There's a pre-configured set of attributes and [attribute-mappings](../app-provisioning/configure-automatic-user-provisioning-portal.md) between Azure AD user objects, and each SaaS application's user objects. Not all SaaS apps enable group attributes. +To implement automatic user provisioning, you need to define the user and group attributes that are needed for the application. There's a preconfigured set of attributes and [attribute-mappings](../app-provisioning/configure-automatic-user-provisioning-portal.md) between Azure AD user objects, and each SaaS application's user objects. Not all SaaS apps enable group attributes. Azure AD supports direct attribute-to-attribute mapping, providing constant values, or [writing expressions for attribute mappings](../app-provisioning/functions-for-customizing-application-data.md). This flexibility gives you fine control over what is populated in the targeted system's attribute. You can use [Microsoft Graph API](../app-provisioning/export-import-provisioning-configuration.md) and Graph Explorer to export your user provisioning attribute mappings and schema to a JSON file and import it back into Azure AD. First, configure automatic user provisioning for the application. Then run test | Scenarios| Expected results | | - | - |-| User is added to a group assigned to the target system | User object is provisioned in target system. <br>User can sign-in to target system and perform the desired actions. | -| User is removed from a group that is assigned to target system | User object is deprovisioned in the target system.<br>User can't sign-in to target system. | -| User information is updated in Azure AD by any method | Updated user attributes are reflected in target system after an incremental cycle | -| User is out of scope | User object is disabled or deleted. <br>Note: This behavior is overridden for [Workday provisioning](skip-out-of-scope-deletions.md). | +| User is added to a group assigned to the target system. | User object is provisioned in target system. <br>User can sign-in to target system and perform the desired actions. | +| User is removed from a group that is assigned to target system. | User object is deprovisioned in the target system.<br>User can't sign-in to target system. | +| User information updates in Azure AD by any method. | Updated user attributes reflect in the target system after an incremental cycle. | +| User is out of scope. | User object is disabled or deleted. <br>Note: This behavior is overridden for [Workday provisioning](skip-out-of-scope-deletions.md). | ### Plan security The provisioning service stores the state of both systems after the initial cycl ### Configure automatic user provisioning -Use the [Azure portal](https://portal.azure.com/) to manage automatic user account provisioning and de-provisioning for applications that support it. Follow the steps in [How do I set up automatic provisioning to an application?](../app-provisioning/user-provisioning.md) +Use the [Azure portal](https://portal.azure.com/) to manage automatic user account provisioning and deprovisioning for applications that support it.
Follow the steps in [How do I set up automatic provisioning to an application?](../app-provisioning/user-provisioning.md) The Azure AD user provisioning service can also be configured and managed using the [Microsoft Graph API](/graph/api/resources/synchronization-overview). After a successful [initial cycle](../app-provisioning/user-provisioning.md), th * The service is manually stopped, and a new initial cycle is triggered using the [Azure portal](https://portal.azure.com/), or using the appropriate [Microsoft Graph API](/graph/api/resources/synchronization-overview) command. -* A new initial cycle is triggered by a change in attribute mappings or scoping filters. +* A new initial cycle triggers a change in attribute mappings or scoping filters. -* The provisioning process goes into quarantine due to a high error rate and stays in quarantine for more than four weeks then it is automatically disabled. +* The provisioning process goes into quarantine due to a high error rate and stays in quarantine for more than four weeks then it's automatically disabled. To review these events, and all other activities performed by the provisioning service, refer to Azure AD [provisioning logs](../reports-monitoring/concept-provisioning-logs.md?context=azure/active-directory/manage-apps/context/manage-apps-context). To understand how long the provisioning cycles take and monitor the progress of ### Gain insights from reports -Azure AD can provide [additional insights](../app-provisioning/application-provisioning-when-will-provisioning-finish-specific-user.md) into your organization's user provisioning usage and operational health through audit logs and reports. +Azure AD can provide more insights into your organization's user provisioning usage and operational health through audit logs and reports. To learn more about user insights, see [Check the status of user provisioning](application-provisioning-when-will-provisioning-finish-specific-user.md). Admins should check the provisioning summary report to monitor the operational health of the provisioning job. All activities performed by the provisioning service are recorded in the Azure AD audit logs. See [Tutorial: Reporting on automatic user account provisioning](../app-provisioning/check-status-user-account-provisioning.md). |
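The row above mentions exporting your user provisioning attribute mappings and schema to JSON through the Microsoft Graph API. As a rough sketch of that export, the code below reads a provisioning job's synchronization schema with a plain `HttpClient`. The service principal ID, job ID, and the pre-acquired token are placeholders, and the exact Graph endpoint version is an assumption; check the linked synchronization overview for what your tenant supports.

```csharp
using System;
using System.IO;
using System.Net.Http;
using System.Net.Http.Headers;
using System.Threading.Tasks;

class ExportProvisioningSchema
{
    static async Task Main()
    {
        // Placeholders: find these IDs in the portal or query them via Graph first.
        string servicePrincipalId = "<service-principal-object-id>";
        string jobId = "<provisioning-job-id>";
        string accessToken = "<token-with-synchronization-permissions>"; // assumed already acquired

        using var http = new HttpClient();
        http.DefaultRequestHeaders.Authorization =
            new AuthenticationHeaderValue("Bearer", accessToken);

        // Assumption: the synchronization schema endpoint; use /beta if v1.0 isn't available to you.
        string url = $"https://graph.microsoft.com/v1.0/servicePrincipals/{servicePrincipalId}" +
                     $"/synchronization/jobs/{jobId}/schema";
        string json = await http.GetStringAsync(url);

        // Save the schema so it can be edited and imported back later.
        await File.WriteAllTextAsync("provisioning-schema.json", json);
        Console.WriteLine($"Exported {json.Length} characters of schema JSON.");
    }
}
```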
active-directory | Application Model | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/application-model.md | For an identity provider to know that a user has access to a particular app, bot * Decide if you want to allow users to sign in only if they belong to your organization. This architecture is known as a single-tenant application. Or, you can allow users to sign in by using any work or school account, which is known as a multi-tenant application. You can also allow personal Microsoft accounts or a social account from LinkedIn, Google, and so on. * Request scope permissions. For example, you can request the "user.read" scope, which grants permission to read the profile of the signed-in user. * Define scopes that define access to your web API. Typically, when an app wants to access your API, it will need to request permissions to the scopes you define.-* Share a secret with the Microsoft identity platform that proves the app's identity. Using a secret is relevant in the case where the app is a confidential client application. A confidential client application is an application that can hold credentials securely. A trusted back-end server is required to store the credentials. +* Share a secret with the Microsoft identity platform that proves the app's identity. Using a secret is relevant in the case where the app is a confidential client application. A confidential [client application](developer-glossary.md#client-application) is an application that can hold credentials securely, like a [web client](developer-glossary.md#web-client). A trusted back-end server is required to store the credentials. -After the app is registered, it's given a unique identifier that it shares with the Microsoft identity platform when it requests tokens. If the app is a [confidential client application](developer-glossary.md#client-application), it will also share the secret or the public key depending on whether certificates or secrets were used. +After the app is registered, it's given a unique identifier that it shares with the Microsoft identity platform when it requests tokens. If the app is a confidential client application, it will also share the secret or the public key depending on whether certificates or secrets were used. The Microsoft identity platform represents applications by using a model that fulfills two main functions: The Microsoft identity platform: * Provides infrastructure for implementing app provisioning within the app developer's tenant, and to any other Azure AD tenant. * Handles user consent during token request time and facilitates the dynamic provisioning of apps across tenants. -*Consent* is the process of a resource owner granting authorization for a client application to access protected resources, under specific permissions, on behalf of the resource owner. The Microsoft identity platform enables: +[*Consent*](developer-glossary.md#consent) is the process of a resource owner granting authorization for a client application to access protected resources, under specific permissions, on behalf of the resource owner. The Microsoft identity platform enables: * Users and administrators to dynamically grant or deny consent for the app to access resources on their behalf. * Administrators to ultimately decide what apps are allowed to do and which users can use specific apps, and how the directory resources are accessed. 
## Multi-tenant apps -In the Microsoft identity platform, an [application object](developer-glossary.md#application-object) describes an application. At deployment time, the Microsoft identity platform uses the application object as a blueprint to create a [service principal](developer-glossary.md#service-principal-object), which represents a concrete instance of an application within a directory or tenant. The service principal defines what the app can actually do in a specific target directory, who can use it, what resources it has access to, and so on. The Microsoft identity platform creates a service principal from an application object through [consent](developer-glossary.md#consent). +In the Microsoft identity platform, an [application object](developer-glossary.md#application-object) describes an application. At deployment time, the Microsoft identity platform uses the application object as a blueprint to create a [service principal](developer-glossary.md#service-principal-object), which represents a concrete instance of an application within a directory or tenant. The service principal defines what the app can actually do in a specific target directory, who can use it, what resources it has access to, and so on. The Microsoft identity platform creates a service principal from an application object through consent. The following diagram shows a simplified Microsoft identity platform provisioning flow driven by consent. It shows two tenants: *A* and *B*. |
active-directory | Quickstart V2 Netcore Daemon | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/quickstart-v2-netcore-daemon.md | -> ### MSAL.NET +> ### Microsoft.Identity.Web.MicrosoftGraph >-> Microsoft Authentication Library (MSAL, in the [Microsoft.Identity.Client](https://www.nuget.org/packages/Microsoft.Identity.Client) package) is the library that's used to sign in users and request tokens for accessing an API protected by the Microsoft identity platform. This quickstart requests tokens by using the application's own identity instead of delegated permissions. The authentication flow in this case is known as a [client credentials OAuth flow](v2-oauth2-client-creds-grant-flow.md). For more information on how to use MSAL.NET with a client credentials flow, see [this article](https://aka.ms/msal-net-client-credentials). +> Microsoft Identity Web (in the [Microsoft.Identity.Web.TokenAcquisition](https://www.nuget.org/packages/Microsoft.Identity.Web.TokenAcquisition) package) is the library that's used to request tokens for accessing an API protected by the Microsoft identity platform. This quickstart requests tokens by using the application's own identity instead of delegated permissions. The authentication flow in this case is known as a [client credentials OAuth flow](v2-oauth2-client-creds-grant-flow.md). For more information on how to use MSAL.NET with a client credentials flow, see [this article](https://aka.ms/msal-net-client-credentials). Because the daemon app in this quickstart calls Microsoft Graph, you install the [Microsoft.Identity.Web.MicrosoftGraph](https://www.nuget.org/packages/Microsoft.Identity.Web.MicrosoftGraph) package, which automatically handles authenticated requests to Microsoft Graph (and itself references Microsoft.Identity.Web.TokenAcquisition). >-> MSAL.NET can be installed by running the following command in the Visual Studio Package Manager Console: +> Microsoft.Identity.Web.MicrosoftGraph can be installed by running the following command in the Visual Studio Package Manager Console: > > ```dotnetcli-> dotnet add package Microsoft.Identity.Client +> dotnet add package Microsoft.Identity.Web.MicrosoftGraph > ``` >-> ### MSAL initialization +> ### Application initialization >-> Add the reference for MSAL by adding the following code: +> Add the reference for Microsoft.Identity.Web by adding the following code: > > ```csharp-> using Microsoft.Identity.Client; +> using Microsoft.Extensions.Configuration; +> using Microsoft.Extensions.DependencyInjection; +> using Microsoft.Graph; +> using Microsoft.Identity.Abstractions; +> using Microsoft.Identity.Web; > ``` >-> Then, initialize MSAL with the following: +> Then, initialize the app with the following: > > ```csharp-> IConfidentialClientApplication app; -> app = ConfidentialClientApplicationBuilder.Create(config.ClientId) -> .WithClientSecret(config.ClientSecret) -> .WithAuthority(new Uri(config.Authority)) -> .Build();
+> TokenAcquirerFactory tokenAcquirerFactory = TokenAcquirerFactory.GetDefaultInstance(); +> +> // Configure the application options to be read from the configuration +> // and add the services you need (Graph, token cache) +> IServiceCollection services = tokenAcquirerFactory.Services; +> services.AddMicrosoftGraph(); +> // By default, you get an in-memory token cache. +> // For more token cache serialization options, see https://aka.ms/msal-net-token-cache-serialization +> +> // Resolve the dependency injection. +> var serviceProvider = tokenAcquirerFactory.Build(); +> ``` +> +> This code uses the configuration defined in the appsettings.json file: +> +> ```json +> { +> "AzureAd": { +> "Instance": "https://login.microsoftonline.com/", +> "TenantId": "[Enter here the tenantID or domain name for your Azure AD tenant]", +> "ClientId": "[Enter here the ClientId for your application]", +> "ClientCredentials": [ +> { +> "SourceType": "ClientSecret", +> "ClientSecret": "[Enter here a client secret for your application]" +> } +> ] +> } +> } > ``` > > | Element | Description | > |||-> | `config.ClientSecret` | The client secret created for the application in the Azure portal. | -> | `config.ClientId` | The application (client) ID for the application registered in the Azure portal. This value can be found on the app's **Overview** page in the Azure portal. | -> | `config.Authority` | (Optional) The security token service (STS) endpoint for the user to authenticate. It's usually `https://login.microsoftonline.com/{tenant}` for the public cloud, where `{tenant}` is the name of the tenant or the tenant ID.| +> | `ClientSecret` | The client secret created for the application in the Azure portal. | +> | `ClientId` | The application (client) ID for the application registered in the Azure portal. This value can be found on the app's **Overview** page in the Azure portal. | +> | `Instance` | (Optional) The security token service (STS) cloud instance endpoint for the app to authenticate. It's usually `https://login.microsoftonline.com/` for the public cloud.| +> | `TenantId` | Name of the tenant or the tenant ID.| >-> For more information, see the [reference documentation for `ConfidentialClientApplication`](/dotnet/api/microsoft.identity.client.iconfidentialclientapplication). +> For more information, see the [reference documentation for `TokenAcquirerFactory`](/dotnet/api/microsoft.identity.web.tokenacquirerfactory). >-> ### Requesting tokens +> ### Calling Microsoft Graph > > To request a token by using the app's identity, use the `AcquireTokenForClient` method: > > ```csharp-> result = await app.AcquireTokenForClient(scopes) -> .ExecuteAsync(); +> GraphServiceClient graphServiceClient = serviceProvider.GetRequiredService<GraphServiceClient>(); +> var users = await graphServiceClient.Users +> .Request() +> .WithAppOnly() +> .GetAsync(); > ``` >-> |Element| Description | -> ||| -> | `scopes` | Contains the requested scopes. For confidential clients, this value should use a format similar to `{Application ID URI}/.default`. This format indicates that the requested scopes are the ones that are statically defined in the app object set in the Azure portal. For Microsoft Graph, `{Application ID URI}` points to `https://graph.microsoft.com`. For custom web APIs, `{Application ID URI}` is defined in the Azure portal, under **Application Registration (Preview)** > **Expose an API**.
| -> -> For more information, see the [reference documentation for `AcquireTokenForClient`](/dotnet/api/microsoft.identity.client.confidentialclientapplication.acquiretokenforclient). -> > [!INCLUDE [Help and support](../../../includes/active-directory-develop-help-support-include.md)] > > ## Next steps |
active-directory | Reference Aadsts Error Codes | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/reference-aadsts-error-codes.md | The `error` field has several possible values - review the protocol documentatio | AADSTS50143 | Session mismatch - Session is invalid because user tenant doesn't match the domain hint due to different resource. [Open a support ticket](../fundamentals/active-directory-troubleshooting-support-howto.md) with Correlation ID, Request ID, and Error code to get more details. | | AADSTS50144 | InvalidPasswordExpiredOnPremPassword - User's Active Directory password has expired. Generate a new password for the user or have the user use the self-service reset tool to reset their password. | | AADSTS50146 | MissingCustomSigningKey - This app is required to be configured with an app-specific signing key. It is either not configured with one, or the key has expired or isn't yet valid. Please contact the owner of the application. |+| AADSTS501461 | AcceptMappedClaims is only supported for a token audience matching the application GUID or an audience within the tenant's verified domains. Either change the resource identifier, or use an application-specific signing key. | | AADSTS50147 | MissingCodeChallenge - The size of the code challenge parameter isn't valid. | | AADSTS501481 | The Code_Verifier doesn't match the code_challenge supplied in the authorization request.| | AADSTS501491 | InvalidCodeChallengeMethodInvalidSize - Invalid size of Code_Challenge parameter.| The `error` field has several possible values - review the protocol documentatio | AADSTS50194 | Application '{appId}'({appName}) isn't configured as a multi-tenant application. Usage of the /common endpoint isn't supported for such applications created after '{time}'. Use a tenant-specific endpoint or configure the application to be multi-tenant. | | AADSTS50196 | LoopDetected - A client loop has been detected. Check the app’s logic to ensure that token caching is implemented, and that error conditions are handled correctly. The app has made too many of the same request in too short a period, indicating that it is in a faulty state or is abusively requesting tokens. | | AADSTS50197 | ConflictingIdentities - The user could not be found. Try signing in again. |-| AADSTS50199 | CmsiInterrupt - For security reasons, user confirmation is required for this request. Because this is an "interaction_required" error, the client should do interactive auth. This occurs because a system webview has been used to request a token for a native application - the user must be prompted to ask if this was actually the app they meant to sign into. To avoid this prompt, the redirect URI should be part of the following safe list: <br />http://<br />https://<br />chrome-extension:// (desktop Chrome browser only) | +| AADSTS50199 | CmsiInterrupt - For security reasons, user confirmation is required for this request. Interrupt is shown for all scheme redirects in mobile browsers. <br />No action required. The user was asked to confirm that this app is the application they intended to sign into. <br />This is a security feature that helps prevent spoofing attacks. This occurs because a system webview has been used to request a token for a native application. <br />To avoid this prompt, the redirect URI should be part of the following safe list: <br />http://<br />https://<br />chrome-extension:// (desktop Chrome browser only) | | AADSTS51000 | RequiredFeatureNotEnabled - The feature is disabled. 
| | AADSTS51001 | DomainHintMustbePresent - Domain hint must be present with on-premises security identifier or on-premises UPN. | | AADSTS1000104| XCB2BResourceCloudNotAllowedOnIdentityTenant - Resource cloud {resourceCloud} isn't allowed on identity tenant {identityTenant}. {resourceCloud} - cloud instance which owns the resource. {identityTenant} - is the tenant where signing-in identity is originated from. | |
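Applications usually encounter these AADSTS codes wrapped in library exceptions rather than as raw protocol responses. Here's a minimal MSAL.NET sketch of the usual handling pattern, with placeholder registration values; the code mentioned in the comment is just one example from the table above.

```csharp
using System;
using System.Threading.Tasks;
using Microsoft.Identity.Client;

class AadStsErrorHandling
{
    static async Task Main()
    {
        // Placeholder registration values for illustration.
        IConfidentialClientApplication app = ConfidentialClientApplicationBuilder
            .Create("<client-id>")
            .WithClientSecret("<client-secret>")
            .WithAuthority(new Uri("https://login.microsoftonline.com/<tenant-id>"))
            .Build();

        try
        {
            AuthenticationResult result = await app
                .AcquireTokenForClient(new[] { "https://graph.microsoft.com/.default" })
                .ExecuteAsync();
            Console.WriteLine("Token acquired.");
        }
        catch (MsalServiceException ex)
        {
            // ex.Message carries the AADSTS code from the table above,
            // for example AADSTS50146 (MissingCustomSigningKey).
            Console.WriteLine($"Azure AD error {ex.ErrorCode}: {ex.Message}");
            Console.WriteLine($"Correlation ID for a support ticket: {ex.CorrelationId}");
        }
    }
}
```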
active-directory | Reference App Manifest | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/reference-app-manifest.md | Example: "keyCredentials": [ { "customKeyIdentifier":null,- "endDate":"2018-09-13T00:00:00Z", + "endDateTime":"2018-09-13T00:00:00Z", "keyId":"<guid>",- "startDate":"2017-09-12T00:00:00Z", + "startDateTime":"2017-09-12T00:00:00Z", "type":"AsymmetricX509Cert", "usage":"Verify", "value":null Example: "passwordCredentials": [ { "customKeyIdentifier": null,- "endDate": "2018-10-19T17:59:59.6521653Z", + "displayName": "Generated by App Service", + "endDateTime": "2022-10-19T17:59:59.6521653Z", + "hint": "Nsn", "keyId": "<guid>",- "startDate":"2016-10-19T17:59:59.6521653Z", - "value":null + "secretText": null, + "startDateTime":"2022-10-19T17:59:59.6521653Z" } ], ``` Use the following comments section to provide feedback that helps refine and sha [IMPLICIT-GRANT]:v1-oauth2-implicit-grant-flow.md [INTEGRATING-APPLICATIONS-AAD]: ./quickstart-register-app.md [O365-PERM-DETAILS]: /graph/permissions-reference-[RBAC-CLOUD-APPS-AZUREAD]: http://www.dushyantgill.com/blog/2014/12/10/roles-based-access-control-in-cloud-applications-using-azure-ad/ +[RBAC-CLOUD-APPS-AZUREAD]: http://www.dushyantgill.com/blog/2014/12/10/roles-based-access-control-in-cloud-applications-using-azure-ad/ |
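The renamed credential properties (`startDateTime`, `endDateTime`, `displayName`, `hint`, `secretText`) match what Microsoft Graph returns when a secret is created programmatically. Here's a hedged sketch using the Graph .NET SDK's v4-style request builders; the application object ID is a placeholder.

```csharp
using System;
using System.Threading.Tasks;
using Microsoft.Graph;

static class AppSecretSample
{
    // Assumes an authenticated GraphServiceClient with Application.ReadWrite.All.
    public static async Task AddSecretAsync(GraphServiceClient graphClient)
    {
        var credential = new PasswordCredential
        {
            DisplayName = "Generated by sample",             // surfaces as "displayName"
            EndDateTime = DateTimeOffset.UtcNow.AddMonths(6) // surfaces as "endDateTime"
        };

        // "<application-object-id>" is a placeholder for the app's object ID (not the client ID).
        PasswordCredential created = await graphClient.Applications["<application-object-id>"]
            .AddPassword(credential)
            .Request()
            .PostAsync();

        // secretText is returned only at creation time; the manifest shows it as null afterward.
        Console.WriteLine($"Hint: {created.Hint}, expires: {created.EndDateTime}");
    }
}
```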
active-directory | Scenario Daemon Acquire Token | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/scenario-daemon-acquire-token.md | After you've constructed a confidential client application, you can acquire a to The scope to request for a client credential flow is the name of the resource followed by `/.default`. This notation tells Azure Active Directory (Azure AD) to use the *application-level permissions* declared statically during application registration. Also, these API permissions must be granted by a tenant administrator. -# [.NET](#tab/dotnet) +# [.NET](#tab/idweb) -```csharp -ResourceId = "someAppIDURI"; -var scopes = new [] { ResourceId+"/.default"}; +Here's an example of defining the scopes for the web API as part of the configuration in an [*appsettings.json*](https://github.com/Azure-Samples/active-directory-dotnetcore-daemon-v2/blob/master/2-Call-OwnApi/daemon-console/appsettings.json) file. This example is taken from the [.NET Core console daemon](https://github.com/Azure-Samples/active-directory-dotnetcore-daemon-v2) code sample on GitHub. ++```json +{ + "AzureAd": { + // Same AzureAd section as before. + }, ++ "MyWebApi": { + "BaseUrl": "https://localhost:44372/", + "RelativePath": "api/TodoList", + "RequestAppToken": true, + "Scopes": [ "[Enter here the scopes for your web API]" ] + } +} ``` # [Java](#tab/java) In MSAL Python, the configuration file looks like this code snippet: } ``` +# [.NET (low level)](#tab/dotnet) ++```csharp +ResourceId = "someAppIDURI"; +var scopes = new [] { ResourceId+"/.default"}; +``` + ### Azure AD (v1.0) resources The scope used for client credentials should always be the resource ID followed ## AcquireTokenForClient API -To acquire a token for the app, you'll use `AcquireTokenForClient` or its equivalent, depending on the platform. +To acquire a token for the app, use `AcquireTokenForClient` or its equivalent, depending on the platform. -# [.NET](#tab/dotnet) +# [.NET](#tab/idweb) ++With Microsoft.Identity.Web, you don't need to acquire a token. You can use higher-level APIs, as you see in [Calling a web API from a daemon application](scenario-daemon-call-api.md). If, however, you're using an SDK that requires a token, the following code snippet shows how to get this token. ```csharp-using Microsoft.Identity.Client; +using Microsoft.Extensions.DependencyInjection; +using Microsoft.Identity.Abstractions; +using Microsoft.Identity.Web; -// With client credentials flows, the scope is always of the shape "resource/.default" because the -// application permissions need to be set statically (in the portal or by PowerShell), and then granted by -// a tenant administrator. -string[] scopes = new string[] { "https://graph.microsoft.com/.default" }; +// In the Program.cs, acquire a token for your downstream API -AuthenticationResult result = null; -try -{ - result = await app.AcquireTokenForClient(scopes) - .ExecuteAsync(); -} -catch (MsalUiRequiredException ex) -{ - // The application doesn't have sufficient permissions. - // - Did you declare enough app permissions during app creation? - // - Did the tenant admin grant permissions to the application? -} -catch (MsalServiceException ex) when (ex.Message.Contains("AADSTS70011")) -{ - // Invalid scope. The scope has to be in the form "https://resourceurl/.default" - // Mitigation: Change the scope to be as expected.
-} +var tokenAcquirerFactory = TokenAcquirerFactory.GetDefaultInstance(); +ITokenAcquirer acquirer = tokenAcquirerFactory.GetTokenAcquirer(); +AcquireTokenResult tokenResult = await acquirer.GetTokenForAppAsync("https://graph.microsoft.com/.default"); +string accessToken = tokenResult.AccessToken; ``` -### AcquireTokenForClient uses the application token cache --In MSAL.NET, `AcquireTokenForClient` uses the application token cache. (All the other AcquireToken*XX* methods use the user token cache.) -Don't call `AcquireTokenSilent` before you call `AcquireTokenForClient`, because `AcquireTokenSilent` uses the *user* token cache. `AcquireTokenForClient` checks the *application* token cache itself and updates it. - # [Java](#tab/java) This code is extracted from the [MSAL Java dev samples](https://github.com/AzureAD/microsoft-authentication-library-for-java/tree/dev/msal4j-sdk/src/samples/confidential-client/). private static IAuthenticationResult acquireToken() throws Exception { # [Node.js](#tab/nodejs) -The code snippet below illustrates token acquisition in an MSAL Node confidential client application: +The following code snippet illustrates token acquisition in an MSAL Node confidential client application: ```JavaScript try { else: print(result.get("correlation_id")) # You might need this when reporting a bug. ``` +# [.NET (low level)](#tab/dotnet) ++```csharp +using Microsoft.Identity.Client; ++// With client credentials flows, the scope is always of the shape "resource/.default" because the +// application permissions need to be set statically (in the portal or by PowerShell), and then granted by +// a tenant administrator. +string[] scopes = new string[] { "https://graph.microsoft.com/.default" }; ++AuthenticationResult result = null; +try +{ + result = await app.AcquireTokenForClient(scopes) + .ExecuteAsync(); +} +catch (MsalUiRequiredException ex) +{ + // The application doesn't have sufficient permissions. + // - Did you declare enough app permissions during app creation? + // - Did the tenant admin grant permissions to the application? +} +catch (MsalServiceException ex) when (ex.Message.Contains("AADSTS70011")) +{ + // Invalid scope. The scope has to be in the form "https://resourceurl/.default" + // Mitigation: Change the scope to be as expected. +} +``` ++### AcquireTokenForClient uses the application token cache ++In MSAL.NET, `AcquireTokenForClient` uses the application token cache. (All the other AcquireToken*XX* methods use the user token cache.) +Don't call `AcquireTokenSilent` before you call `AcquireTokenForClient`, because `AcquireTokenSilent` uses the *user* token cache. `AcquireTokenForClient` checks the *application* token cache itself and updates it. + ### Protocol If your daemon app calls your own web API and you weren't able to add an app per ## Next steps -# [.NET](#tab/dotnet) +# [.NET](#tab/idweb) Move on to the next article in this scenario,-[Calling a web API](./scenario-daemon-call-api.md?tabs=dotnet). +[Calling a web API](./scenario-daemon-call-api.md?tabs=idweb). # [Java](#tab/java) Move on to the next article in this scenario, Move on to the next article in this scenario, [Calling a web API](./scenario-daemon-call-api.md?tabs=python). +# [.NET low level](#tab/dotnet) ++Move on to the next article in this scenario, +[Calling a web API](./scenario-daemon-call-api.md?tabs=dotnet). |
active-directory | Scenario Daemon App Configuration | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/scenario-daemon-app-configuration.md | The following Microsoft libraries support daemon apps: ## Configure the authority -Daemon applications use application permissions rather than delegated permissions. So their supported account type can't be an account in any organizational directory or any personal Microsoft account (for example, Skype, Xbox, Outlook.com). There's no tenant admin to grant consent to a daemon application for a Microsoft personal account. You'll need to choose *accounts in my organization* or *accounts in any organization*. +Daemon applications use application permissions rather than delegated permissions. So their supported account type can't be an account in any organizational directory or any personal Microsoft account (for example, Skype, Xbox, Outlook.com). There's no tenant admin to grant consent to a daemon application for a Microsoft personal account. You need to choose *accounts in my organization* or *accounts in any organization*. The authority specified in the application configuration should be tenanted (specifying a tenant ID or a domain name associated with your organization). -Even if you want to provide a multitenant tool, you should use a tenant ID or domain name, and **not** `common` or `organizations` with this flow, because the service cannot reliably infer which tenant should be used. +Even if you want to provide a multitenant tool, you should use a tenant ID or domain name, and **not** `common` or `organizations` with this flow, because the service can't reliably infer which tenant should be used. ## Configure and instantiate the application The configuration file defines: - The client ID that you got from the application registration. - Either a client secret or a certificate. -# [.NET](#tab/dotnet) +# [.NET](#tab/idweb) -Here's an example of defining the configuration in an [*appsettings.json*](https://github.com/Azure-Samples/active-directory-dotnetcore-daemon-v2/blob/master/1-Call-MSGraph/daemon-console/appsettings.json) file. This example is taken from from the [.NET Core console daemon](https://github.com/Azure-Samples/active-directory-dotnetcore-daemon-v2) code sample on GitHub. +Here's an example of defining the configuration in an [*appsettings.json*](https://github.com/Azure-Samples/active-directory-dotnetcore-daemon-v2/blob/master/1-Call-MSGraph/daemon-console/appsettings.json) file. This example is taken from the [.NET Core console daemon](https://github.com/Azure-Samples/active-directory-dotnetcore-daemon-v2) code sample on GitHub. ```json {- "Instance": "https://login.microsoftonline.com/{0}", - "Tenant": "[Enter here the tenantID or domain name for your Azure AD tenant]", - "ClientId": "[Enter here the ClientId for your application]", - "ClientSecret": "[Enter here a client secret for your application]", - "CertificateName": "[Or instead of client secret: Enter here the name of a certificate (from the user cert store) as registered with your application]" + "AzureAd": { + "Instance": "https://login.microsoftonline.com/", + "TenantId": "[Enter here the tenantID or domain name for your Azure AD tenant]", + "ClientId": "[Enter here the ClientId for your application]", + "ClientCredentials": [ + { + "SourceType": "ClientSecret", + "ClientSecret": "[Enter here a client secret for your application]" + } + ] + } }+ ``` -You provide either a `ClientSecret` or a `CertificateName`. 
These settings are exclusive. +Instead of a client secret, you provide a certificate or [workload identity federation](/azure/active-directory/workload-identities/workload-identity-federation.md) credentials. # [Java](#tab/java) When you build a confidential client with certificates, the [parameters.json](ht } ``` +# [.NET (low level)](#tab/dotnet) ++Here's an example of defining the configuration in an [*appsettings.json*](https://github.com/Azure-Samples/active-directory-dotnetcore-daemon-v2/blob/master/1-Call-MSGraph/daemon-console/appsettings.json) file. This example is taken from the [.NET Core console daemon](https://github.com/Azure-Samples/active-directory-dotnetcore-daemon-v2) code sample on GitHub. ++```json +{ + "Instance": "https://login.microsoftonline.com/{0}", + "Tenant": "[Enter here the tenantID or domain name for your Azure AD tenant]", + "ClientId": "[Enter here the ClientId for your application]", + "ClientSecret": "[Enter here a client secret for your application]", + "CertificateName": "[Or instead of client secret: Enter here the name of a certificate (from the user cert store) as registered with your application]" +} +``` ++You provide either a `ClientSecret` or a `CertificateName`. These settings are exclusive. + ### Instantiate the MSAL application The construction is different, depending on whether you're using client secrets Reference the MSAL package in your application code. -# [.NET](#tab/dotnet) +# [.NET](#tab/idweb) -Add the [Microsoft.Identity.Client](https://www.nuget.org/packages/Microsoft.Identity.Client) NuGet package to your application, and then add a `using` directive in your code to reference it. +Add the [Microsoft.Identity.Web.TokenAcquisition](https://www.nuget.org/packages/Microsoft.Identity.Web.TokenAcquisition) NuGet package to your application. +Alternatively, if you want to call Microsoft Graph, add the [Microsoft.Identity.Web.MicrosoftGraph](https://www.nuget.org/packages/Microsoft.Identity.Web.MicrosoftGraph) package. +Your project could be as follows. The *appsettings.json* file needs to be copied to the output directory. -In MSAL.NET, the confidential client application is represented by the `IConfidentialClientApplication` interface. +```xml +<Project Sdk="Microsoft.NET.Sdk"> ++ <PropertyGroup> + <OutputType>Exe</OutputType> + <TargetFramework>net7.0</TargetFramework> + <RootNamespace>daemon_console</RootNamespace> + </PropertyGroup> ++ <ItemGroup> + <PackageReference Include="Microsoft.Identity.Web.MicrosoftGraph" Version="2.6.1" /> + </ItemGroup> ++ <ItemGroup> + <None Update="appsettings.json"> + <CopyToOutputDirectory>PreserveNewest</CopyToOutputDirectory> + </None> + </ItemGroup> +</Project> +``` ++In the Program.cs file, add a `using` directive in your code to reference Microsoft.Identity.Web. ```csharp-using Microsoft.Identity.Client; -IConfidentialClientApplication app; +using Microsoft.Identity.Abstractions; +using Microsoft.Identity.Web; ``` # [Java](#tab/java) import com.microsoft.aad.msal4j.SilentParameters; # [Node.js](#tab/nodejs) -Simply install the packages by running `npm install` in the folder where *package.json* file resides. Then, import **msal-node** package: +Install the packages by running `npm install` in the folder where the *package.json* file resides.
Then, import **msal-node** package: ```JavaScript const msal = require('@azure/msal-node'); import sys import logging ``` -+# [.NET (low level)](#tab/dotnet) -#### Instantiate the confidential client application with a client secret --Here's the code to instantiate the confidential client application with a client secret: +Add the [Microsoft.Identity.Client](https://www.nuget.org/packages/Microsoft.Identity.Client) NuGet package to your application, and then add a `using` directive in your code to reference it. -# [.NET](#tab/dotnet) +In MSAL.NET, the confidential client application is represented by the `IConfidentialClientApplication` interface. ```csharp-app = ConfidentialClientApplicationBuilder.Create(config.ClientId) - .WithClientSecret(config.ClientSecret) - .WithAuthority(new Uri(config.Authority)) - .Build(); +using Microsoft.Identity.Client; +IConfidentialClientApplication app; ``` -The `Authority` is a concatenation of the cloud instance and the tenant ID, for example `https://login.microsoftonline.com/contoso.onmicrosoft.com` or `https://login.microsoftonline.com/eb1ed152-0000-0000-0000-32401f3f9abd`. In the *appsettings.json* file shown in the [Configuration file](#configuration-file) section, these are represented by the `Instance` and `Tenant` values, respectively. + -In the code sample the previous snippet was taken from, `Authority` is a property on the [AuthenticationConfig](https://github.com/Azure-Samples/active-directory-dotnetcore-daemon-v2/blob/ffc4a9f5d9bdba5303e98a1af34232b434075ac7/1-Call-MSGraph/daemon-console/AuthenticationConfig.cs#L61-L70) class, and is defined as such: +#### Instantiate the confidential client application with a client secret ++Here's the code to instantiate the confidential client application with a client secret: ++# [.NET](#tab/idweb) ```csharp-/// <summary> -/// URL of the authority -/// </summary> -public string Authority -{ - get + class Program {- return String.Format(CultureInfo.InvariantCulture, Instance, Tenant); + static async Task Main(string[] _) + { + // Get the Token acquirer factory instance. By default it reads an appsettings.json + // file if it exists in the same folder as the app (make sure that the + // "Copy to Output Directory" property of the appsettings.json file is "Copy if newer"). + TokenAcquirerFactory tokenAcquirerFactory = TokenAcquirerFactory.GetDefaultInstance(); ++ // Configure the application options to be read from the configuration + // and add the services you need (Graph, token cache) + IServiceCollection services = tokenAcquirerFactory.Services; + services.AddMicrosoftGraph(); + // By default, you get an in-memory token cache. + // For more token cache serialization options, see https://aka.ms/msal-net-token-cache-serialization ++ // Resolve the dependency injection. + var serviceProvider = tokenAcquirerFactory.Build(); ++ // ... + } }-} ``` +The configuration is read from the *appsettings.json*: + # [Java](#tab/java) ```Java app = msal.ConfidentialClientApplication( ) ``` +# [.NET (low level)](#tab/dotnet) ++```csharp +app = ConfidentialClientApplicationBuilder.Create(config.ClientId) + .WithClientSecret(config.ClientSecret) + .WithAuthority(new Uri(config.Authority)) + .Build(); +``` ++The `Authority` is a concatenation of the cloud instance and the tenant ID, for example `https://login.microsoftonline.com/contoso.onmicrosoft.com` or `https://login.microsoftonline.com/eb1ed152-0000-0000-0000-32401f3f9abd`. 
In the *appsettings.json* file shown in the [Configuration file](#configuration-file) section, instance and tenant are represented by the `Instance` and `Tenant` values, respectively. ++In the code sample the previous snippet was taken from, `Authority` is a property on the [AuthenticationConfig](https://github.com/Azure-Samples/active-directory-dotnetcore-daemon-v2/blob/ffc4a9f5d9bdba5303e98a1af34232b434075ac7/1-Call-MSGraph/daemon-console/AuthenticationConfig.cs#L61-L70) class, and is defined as such: ++```csharp +/// <summary> +/// URL of the authority +/// </summary> +public string Authority +{ + get + { + return String.Format(CultureInfo.InvariantCulture, Instance, Tenant); + } +} +``` + #### Instantiate the confidential client application with a client certificate Here's the code to build an application with a certificate: -# [.NET](#tab/dotnet) +# [.NET](#tab/idweb) -```csharp -X509Certificate2 certificate = ReadCertificate(config.CertificateName); -app = ConfidentialClientApplicationBuilder.Create(config.ClientId) - .WithCertificate(certificate) - .WithAuthority(new Uri(config.Authority)) - .Build(); +The code itself is exactly the same. The certificate is described in the configuration. +There are many ways to get the certificate. For details, see https://aka.ms/ms-id-web-certificates. +Here's how to get your certificate from Key Vault. Microsoft Identity Web delegates to Azure Identity's DefaultAzureCredential, and uses managed identity when available to access the certificate from Key Vault. You can debug your application locally because it then uses your developer credentials. ++```json + "ClientCredentials": [ + { + "SourceType": "KeyVault", + "KeyVaultUrl": "https://yourKeyVaultUrl.vault.azure.net", + "KeyVaultCertificateName": "NameOfYourCertificate" + } + ] ``` # [Java](#tab/java) app = msal.ConfidentialClientApplication( ) ``` ---#### Advanced scenario: Instantiate the confidential client application with client assertions - # [.NET](#tab/dotnet) -Instead of a client secret or a certificate, the confidential client application can also prove its identity by using client assertions. --MSAL.NET has two methods to provide signed assertions to the confidential client app: --- `.WithClientAssertion()`-- `.WithClientClaims()`--When you use `WithClientAssertion`, provide a signed JWT. This advanced scenario is detailed in [Client assertions](msal-net-client-assertions.md). - ```csharp-string signedClientAssertion = ComputeAssertion(); +X509Certificate2 certificate = ReadCertificate(config.CertificateName); app = ConfidentialClientApplicationBuilder.Create(config.ClientId)- .WithClientAssertion(signedClientAssertion) - .Build(); + .WithCertificate(certificate) + .WithAuthority(new Uri(config.Authority)) + .Build(); ``` -When you use `WithClientClaims`, MSAL.NET will produce a signed assertion that contains the claims expected by Azure AD, plus additional client claims that you want to send. -This code shows how to do that: -```csharp -string ipAddress = "192.168.1.2"; -var claims = new Dictionary<string, string> { { "client_ip", ipAddress } }; -X509Certificate2 certificate = ReadCertificate(config.CertificateName); -app = ConfidentialClientApplicationBuilder.Create(config.ClientId) - .WithAuthority(new Uri(config.Authority)) - .WithClientClaims(certificate, claims) - .Build(); -``` -Again, for details, see [Client assertions](msal-net-client-assertions.md).
+# [.NET](#tab/idweb) ++Instead of a client secret or a certificate, the confidential client application can also prove its identity by using client assertions. See +[CredentialDescription](/dotnet/api/microsoft.identity.abstractions.credentialdescription?view=msal-model-dotnet-latest) for details. # [Java](#tab/java) app = msal.ConfidentialClientApplication( For details, see the MSAL Python reference documentation for [ConfidentialClientApplication](https://msal-python.readthedocs.io/en/latest/#msal.ClientApplication.__init__). +# [.NET (low level)](#tab/dotnet) ++Instead of a client secret or a certificate, the confidential client application can also prove its identity by using client assertions. ++MSAL.NET has two methods to provide signed assertions to the confidential client app: ++- `.WithClientAssertion()` +- `.WithClientClaims()` ++When you use `WithClientAssertion`, provide a signed JWT. This advanced scenario is detailed in [Client assertions](msal-net-client-assertions.md). ++```csharp +string signedClientAssertion = ComputeAssertion(); +app = ConfidentialClientApplicationBuilder.Create(config.ClientId) + .WithClientAssertion(signedClientAssertion) + .Build(); +``` ++When you use `WithClientClaims`, MSAL.NET produces a signed assertion that contains the claims expected by Azure AD, plus additional client claims that you want to send. +This code shows how to do that: ++```csharp +string ipAddress = "192.168.1.2"; +var claims = new Dictionary<string, string> { { "client_ip", ipAddress } }; +X509Certificate2 certificate = ReadCertificate(config.CertificateName); +app = ConfidentialClientApplicationBuilder.Create(config.ClientId) + .WithAuthority(new Uri(config.Authority)) + .WithClientClaims(certificate, claims) + .Build(); +``` ++Again, for details, see [Client assertions](msal-net-client-assertions.md). + ## Next steps -# [.NET](#tab/dotnet) +# [.NET](#tab/idweb) Move on to the next article in this scenario,-[Acquire a token for the app](./scenario-daemon-acquire-token.md?tabs=dotnet). +[Acquire a token for the app](./scenario-daemon-acquire-token.md?tabs=idweb). # [Java](#tab/java) Move on to the next article in this scenario, Move on to the next article in this scenario, [Acquire a token for the app](./scenario-daemon-acquire-token.md?tabs=python). +# [.NET (low level)](#tab/dotnet) ++Move on to the next article in this scenario, +[Acquire a token for the app](./scenario-daemon-acquire-token.md?tabs=dotnet). + |
active-directory | Scenario Daemon Call Api | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/scenario-daemon-call-api.md | -.NET daemon apps can call a web API. .NET daemon apps can also call several pre-approved web APIs. +.NET daemon apps can call a web API. .NET daemon apps can also call several preapproved web APIs. ## Calling a web API from a daemon application Here's how to use the token to call an API: -# [.NET](#tab/dotnet) +# [.NET](#tab/idweb) +Microsoft.Identity.Web abstracts away the complexity of MSAL.NET. It provides you with higher-level APIs that handle the internals of MSAL.NET for you, such as processing Conditional Access errors and token caching. ++Here's the Program.cs of a daemon app calling a downstream API: ++```csharp +using Microsoft.Extensions.DependencyInjection; +using Microsoft.Identity.Abstractions; +using Microsoft.Identity.Web; ++// In the Program.cs, acquire a token for your downstream API ++var tokenAcquirerFactory = TokenAcquirerFactory.GetDefaultInstance(); +tokenAcquirerFactory.Services.AddDownstreamApi("MyApi", + tokenAcquirerFactory.Configuration.GetSection("MyWebApi")); +var sp = tokenAcquirerFactory.Build(); ++var api = sp.GetRequiredService<IDownstreamApi>(); +var result = await api.GetForAppAsync<IEnumerable<TodoItem>>("MyApi"); +Console.WriteLine($"result = {result?.Count()}"); +``` ++Here's the Program.cs of a daemon app that calls Microsoft Graph: ++```csharp +var tokenAcquirerFactory = TokenAcquirerFactory.GetDefaultInstance(); +tokenAcquirerFactory.Services.AddMicrosoftGraph(); +var serviceProvider = tokenAcquirerFactory.Build(); +try +{ + GraphServiceClient graphServiceClient = serviceProvider.GetRequiredService<GraphServiceClient>(); + var users = await graphServiceClient.Users + .Request() + .WithAppOnly() + .GetAsync(); + Console.WriteLine($"{users.Count} users"); + Console.ReadKey(); +} +catch (Exception ex) { Console.WriteLine("We could not retrieve the user's list: " + $"{ex}"); } +``` # [Java](#tab/java) http_headers = {'Authorization': 'Bearer ' + result['access_token'], data = requests.get(endpoint, headers=http_headers, stream=False).json() ``` +# [.NET low level](#tab/dotnet) ++ ## Calling several APIs -For daemon apps, the web APIs that you call need to be pre-approved. There's no incremental consent with daemon apps. (There's no user interaction.) The tenant admin needs to provide consent in advance for the application and all the API permissions. If you want to call several APIs, acquire a token for each resource, each time calling `AcquireTokenForClient`. MSAL will use the application token cache to avoid unnecessary service calls. +For daemon apps, the web APIs that you call need to be preapproved. There's no incremental consent with daemon apps. (There's no user interaction.) The tenant admin needs to provide consent in advance for the application and all the API permissions. If you want to call several APIs, acquire a token for each resource, each time calling `AcquireTokenForClient`. MSAL uses the application token cache to avoid unnecessary service calls. ## Next steps -# [.NET](#tab/dotnet) +# [.NET](#tab/idweb) Move on to the next article in this scenario,-[Move to production](./scenario-daemon-production.md?tabs=dotnet). +[Move to production](./scenario-daemon-production.md?tabs=idweb). # [Java](#tab/java) Move on to the next article in this scenario, Move on to the next article in this scenario, [Move to production](./scenario-daemon-production.md?tabs=python).
-+# [.NET low level](#tab/dotnet) ++Move on to the next article in this scenario, +[Move to production](./scenario-daemon-production.md?tabs=dotnet). ++ |
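The "Calling several APIs" paragraph above boils down to one token per preapproved resource, with the application token cache absorbing repeat requests. Here's a minimal MSAL.NET sketch of that pattern; the registration values and the second resource URI are placeholders.

```csharp
using System;
using System.Threading.Tasks;
using Microsoft.Identity.Client;

class MultiResourceDaemon
{
    static async Task Main()
    {
        IConfidentialClientApplication app = ConfidentialClientApplicationBuilder
            .Create("<client-id>")
            .WithClientSecret("<client-secret>")
            .WithAuthority(new Uri("https://login.microsoftonline.com/<tenant-id>"))
            .Build();

        // One token per preapproved resource; /.default picks up the
        // statically granted application permissions for each API.
        string[] resources =
        {
            "https://graph.microsoft.com/.default",
            "api://<your-api-client-id>/.default" // placeholder custom web API
        };

        foreach (string scope in resources)
        {
            // AcquireTokenForClient consults the application token cache first,
            // so repeated calls for the same resource don't hit Azure AD again.
            AuthenticationResult result = await app
                .AcquireTokenForClient(new[] { scope })
                .ExecuteAsync();
            Console.WriteLine($"{scope}: token expires {result.ExpiresOn}");
        }
    }
}
```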
active-directory | Web App Quickstart Portal Node Js Ciam | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/web-app-quickstart-portal-node-js-ciam.md | Last updated 04/12/2023 # Portal quickstart for React SPA -> [!div renderon="portal" class="sxs-lookup"] > In this quickstart, you download and run a code sample that demonstrates how a React single-page application (SPA) can sign in users with Azure AD CIAM.-> ++> [!div renderon="portal" id="display-on-portal" class="sxs-lookup"] > ## Prerequisites > > * Azure subscription - [Create an Azure subscription for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F) > * [Node.js](https://nodejs.org/en/download/) > * [Visual Studio Code](https://code.visualstudio.com/download) or another code editor >-> ## Download the code -> -> > [!div class="nextstepaction"] -> > [Download the code sample](https://github.com/Azure-Samples/ms-identity-ciam-javascript-tutorial/archive/react-quickstart.zip) -> > ## Run the sample > > 1. Unzip the downloaded file. >-> 1. Locate the folder that contains the `package.json` file in your terminal, then run the following command: +> 1. In your terminal, locate the folder that contains the `package.json` file, then run the following command: > > ```console > npm install && npm start Last updated 04/12/2023 > > 1. Open your browser and visit `http://localhost:3000`. >-> 1. Select the **Sign-in** link on the navigation bar. +> 1. Select the **Sign-in** link on the navigation bar, then follow the prompts. > |
active-directory | Code Samples | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/external-identities/code-samples.md | +# Customer intent: As a tenant administrator, I want to bulk-invite external users to an organization from email addresses that I've stored in a .csv file. # Azure Active Directory B2B collaboration code and PowerShell samples ## PowerShell example -You can bulk-invite external users to an organization from email addresses that you've stored in a .CSV file. +You can bulk-invite external users to an organization from email addresses that you've stored in a .csv file. -1. Prepare the .CSV file - Create a new CSV file and name it invitations.csv. In this example, the file is saved in C:\data, and contains the following information: +1. Prepare the .csv file + Create a new .csv file and name it invitations.csv. In this example, the file is saved in C:\data, and contains the following information: Name | InvitedUserEmailAddress | -- This cmdlet sends an invitation to the email addresses in invitations.csv. More ## Code sample -The code sample illustrates how to call the invitation API and get the redemption URL. Use the redemption URL to send a custom invitation email. The email can be composed with an HTTP client, so you can customize how it looks and send it through the Microsoft Graph API. +The code sample illustrates how to call the invitation API and get the redemption URL. Use the redemption URL to send a custom invitation email. You can compose the email with an HTTP client, so you can customize how it looks and send it through the Microsoft Graph API. # [HTTP](#tab/http) const inviteRedeemUrl = await sendInvite(); ## Next steps -- [What is Azure AD B2B collaboration?](what-is-b2b.md)+- [Samples for guest user self-service sign-up](code-samples-self-service-sign-up.md) |
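To complement the samples above, here's a hedged sketch of the invitation flow using the Graph .NET SDK's v4-style request builders: create the invitation with the built-in email suppressed, then embed the returned redemption URL in a custom message. The email address and redirect URL are placeholders.

```csharp
using System.Threading.Tasks;
using Microsoft.Graph;

static class InviteSample
{
    // Assumes an authenticated GraphServiceClient with the User.Invite.All permission.
    public static async Task<string> GetRedemptionUrlAsync(GraphServiceClient graphClient)
    {
        var invitation = new Invitation
        {
            InvitedUserEmailAddress = "guest@example.com",   // placeholder
            InviteRedirectUrl = "https://myapp.contoso.com", // placeholder
            SendInvitationMessage = false                    // compose and send your own email instead
        };

        Invitation created = await graphClient.Invitations
            .Request()
            .AddAsync(invitation);

        // Embed this URL in the custom invitation email you send through the Microsoft Graph API.
        return created.InviteRedeemUrl;
    }
}
```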
active-directory | 3 Secure Access Plan | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/fundamentals/3-secure-access-plan.md | Generally, organizations customize policy, however consider the following parame ## Access control methods -Some features, for example entitlement management, are available with an Azure AD Premium 2 (P2) license. Microsoft 365 E5 and Office 365 E5 licenses include Azure AD P2 licenses. Learn more in the following entitlement management section. +Some features, for example entitlement management, are available with an Azure AD Premium 2 (P2) license. Microsoft 365 E5 and Office 365 E5 licenses include Azure AD Premium P2 licenses. Learn more in the following entitlement management section. > [!NOTE]-> Licenses are for one user. Therefore users, administrators, and business owners can have delegated access control. This scenario can occur with Azure AD P2 or Microsoft 365 E5, and you don't have to enable licenses for all users. The first 50,000 external users are free. If you don't enable P2 licenses for other internal users, they can't use entitlement management. +> Licenses are for one user. Therefore users, administrators, and business owners can have delegated access control. This scenario can occur with Azure AD Premium P2 or Microsoft 365 E5, and you don't have to enable licenses for all users. The first 50,000 external users are free. If you don't enable P2 licenses for other internal users, they can't use entitlement management. Other combinations of Microsoft 365, Office 365, and Azure AD have functionality to manage external users. See, [Microsoft 365 guidance for security & compliance](/office365/servicedescriptions/microsoft-365-service-descriptions/microsoft-365-tenantlevel-services-licensing-guidance/microsoft-365-security-compliance-licensing-guidance). -## Govern access with Azure AD P2 and Microsoft 365 or Office 365 E5 +## Govern access with Azure AD Premium P2 and Microsoft 365 or Office 365 E5 -Azure AD P2 and Microsoft 365 E5 have all the security and governance tools. +Azure AD Premium P2, included in Microsoft 365 E5, has additional security and governance capabilities. ### Provision, sign-in, review access, and deprovision access Use entitlement management to provision and deprovision access to groups and tea Learn more: [Create a new access package in entitlement management](../governance/entitlement-management-access-package-create.md) -## Governance with Azure AD P1, Microsoft 365, Office 365 E3 +## Manage access with Azure AD P1, Microsoft 365, Office 365 E3 ### Provision, sign-in, review access, and deprovision access |
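For context, access packages can also be created programmatically through Microsoft Graph entitlement management. A minimal sketch, assuming a token with the `EntitlementManagement.ReadWrite.All` permission; the catalog ID and display name are illustrative placeholders, not a definitive implementation:

```javascript
// Hedged sketch: create an access package via Microsoft Graph entitlement
// management. Assumes a token with EntitlementManagement.ReadWrite.All;
// the catalog ID and display name are illustrative placeholders.
async function createAccessPackage(accessToken) {
  const response = await fetch(
    "https://graph.microsoft.com/v1.0/identityGovernance/entitlementManagement/accessPackages",
    {
      method: "POST",
      headers: {
        Authorization: `Bearer ${accessToken}`,
        "Content-Type": "application/json",
      },
      body: JSON.stringify({
        displayName: "External collaborators - Project X",
        description: "Groups and Teams that invited partners need",
        catalog: { id: "<catalog-id>" }, // placeholder catalog
      }),
    }
  );
  return response.json();
}
```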
active-directory | 9 Secure Access Teams Sharepoint | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/fundamentals/9-secure-access-teams-sharepoint.md | Guest invite settings determine who invites guests and how guests are invited. T * The IT team: * After training is complete, the IT team grants the Guest Inviter role- * To enable access reviews, assigns Azure AD P2 license to the Microsoft 365 group owner + * Ensures there are sufficient Azure AD Premium P2 licenses for the Microsoft 365 group owners who will review * Creates a Microsoft 365 group access review * Confirms access reviews occur * Removes users added to SharePoint |
active-directory | Active Directory Deployment Plans | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/fundamentals/active-directory-deployment-plans.md | The following list describes features and services for productivity gains in hyb * See, [B2B collaboration overview](../external-identities/what-is-b2b.md) * See, [Plan an Azure Active Directory B2B collaboration deployment](../fundamentals/secure-external-access-resources.md) -## Governance and reporting +## Identity Governance and reporting -Use the following list to learn about governance and reporting. Items in the list refer to Microsoft Entra. +Use the following list to learn about identity governance and reporting. Items in the list refer to Microsoft Entra. Learn more: [Secure access for a connected world—meet Microsoft Entra](https://www.microsoft.com/en-us/security/blog/?p=114039) Learn more: [Secure access for a connected world—meet Microsoft Entra](https:/ * See, [Plan a Microsoft Entra access reviews deployment](../governance/deploy-access-reviews.md) * **Identity governance** - Meet your compliance and risk management objectives for access to critical applications. Learn how to enforce accurate access. * See, [Govern access for applications in your environment](../governance/identity-governance-applications-prepare.md)- -Learn more: [Azure governance documentation](../../governance/index.yml) ## Best practices for a pilot |
active-directory | Active Directory Ops Guide Auth | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/fundamentals/active-directory-ops-guide-auth.md | As you review your list, you may find you need to either assign an owner for tas #### Owner recommended reading - [Assigning administrator roles in Azure Active Directory](../roles/permissions-reference.md)-- [Governance in Azure](../../governance/index.yml) ## Credentials management Conditional Access is an essential tool for improving the security posture of yo - Plan for [break glass](../roles/security-planning.md#break-glass-what-to-do-in-an-emergency) accounts without MFA controls - Ensure a consistent experience across Microsoft 365 client applications (for example, Teams, OneDrive, Outlook, etc.) by implementing the same set of controls for services such as Exchange Online and SharePoint Online - Assignment to policies should be implemented through groups, not individuals-- Do regular reviews of the exception groups used in policies to limit the time users are out of the security posture. If you own Azure AD Premium P2, then you can use access reviews to automate the process (see the sketch below)+ #### Conditional Access recommended reading |
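A minimal sketch of automating the exception-group review recommended above, using the Microsoft Graph access reviews API. This assumes Azure AD Premium P2 and a token with the `AccessReview.ReadWrite.All` permission; the group ID, schedule, and display name are placeholder assumptions:

```javascript
// Hedged sketch: schedule a recurring access review of a Conditional Access
// exception group, reviewed by the group's owners. Assumes Azure AD Premium P2
// and a token with AccessReview.ReadWrite.All; the group ID is a placeholder.
async function createExceptionGroupReview(accessToken, groupId) {
  const definition = {
    displayName: "Quarterly review of CA exception group",
    scope: {
      "@odata.type": "#microsoft.graph.accessReviewQueryScope",
      query: `/groups/${groupId}/transitiveMembers`,
      queryType: "MicrosoftGraph",
    },
    reviewers: [
      { query: `/groups/${groupId}/owners`, queryType: "MicrosoftGraph" },
    ],
    settings: {
      mailNotificationsEnabled: true,
      instanceDurationInDays: 14,
      recurrence: {
        pattern: { type: "absoluteMonthly", interval: 3 }, // every quarter
        range: { type: "noEnd", startDate: "2023-07-01" },
      },
    },
  };
  const response = await fetch(
    "https://graph.microsoft.com/v1.0/identityGovernance/accessReviews/definitions",
    {
      method: "POST",
      headers: {
        Authorization: `Bearer ${accessToken}`,
        "Content-Type": "application/json",
      },
      body: JSON.stringify(definition),
    }
  );
  return response.json();
}
```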
active-directory | Active Directory Ops Guide Govern | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/fundamentals/active-directory-ops-guide-govern.md | As you review your list, you may find you need to either assign an owner for tas #### Owner recommended reading - [Assigning administrator roles in Azure Active Directory](../roles/permissions-reference.md)-- [Governance in Azure](../../governance/index.yml) ### Configuration changes testing |
active-directory | Active Directory Ops Guide Iam | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/fundamentals/active-directory-ops-guide-iam.md | As you review your list, you may find you need to either assign an owner for tas #### Assigning owners recommended reading - [Assigning administrator roles in Azure Active Directory](../roles/permissions-reference.md)-- [Governance in Azure](../../governance/index.yml) ## On-premises identity synchronization |
active-directory | Active Directory Ops Guide Intro | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/fundamentals/active-directory-ops-guide-intro.md | This operations reference guide describes the checks and actions you should take Some recommendations here might not be applicable to all customers' environments, for example, AD FS best practices might not apply if your organization uses password hash sync. > [!NOTE]-> These recommendations are current as of the date of publishing but can change over time. Organizations should continuously evaluate their identity practices as Microsoft products and services evolve over time. Recommendations can change when organizations subscribe to a different Azure AD Premium license. For example, Azure AD Premium P2 will include more governance recommendations. +> These recommendations are current as of the date of publishing but can change over time. Organizations should continuously evaluate their identity practices as Microsoft products and services evolve over time. Recommendations can change when organizations subscribe to a different Azure AD Premium license. ## Stakeholders |
active-directory | Active Directory Ops Guide Ops | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/fundamentals/active-directory-ops-guide-ops.md | As you review your list, you may find you need to either assign an owner for tas #### Owners recommended reading - [Assigning administrator roles in Azure Active Directory](../roles/permissions-reference.md)-- [Governance in Azure](../../governance/index.yml) ## Hybrid management |
active-directory | Concept Secure Remote Workers | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/fundamentals/concept-secure-remote-workers.md | The following table is intended to highlight the key actions for the following l The following table is intended to highlight the key actions for the following license subscriptions: -- Azure Active Directory Premium P2 (Azure AD P2)+- Azure Active Directory Premium P2 - Enterprise Mobility + Security (EMS E5) - Microsoft 365 (E5, A5) |
active-directory | Whats Deprecated Azure Ad | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/fundamentals/whats-deprecated-azure-ad.md | Use the following table to learn about changes including deprecations, retiremen |Microsoft Authenticator app [Number matching](../authentication/how-to-mfa-number-match.md)|Feature change|May 8, 2023| |[My Groups experience](https://techcommunity.microsoft.com/t5/microsoft-entra-azure-ad-blog/microsoft-entra-change-announcements-march-2023-train/ba-p/2967448)|Feature change|May 2023| |[My Apps browser extension](https://techcommunity.microsoft.com/t5/microsoft-entra-azure-ad-blog/microsoft-entra-change-announcements-march-2023-train/ba-p/2967448)|Feature change|May 2023|-|[System-preferred authentication methods](https://techcommunity.microsoft.com/t5/microsoft-entra-azure-ad-blog/microsoft-entra-change-announcements-march-2023-train/ba-p/2967448)|Feature change|On GA| +|[System-preferred authentication methods](https://techcommunity.microsoft.com/t5/microsoft-entra-azure-ad-blog/microsoft-entra-change-announcements-march-2023-train/ba-p/2967448)|Feature change|Sometime after GA| |[Azure AD Authentication Library (ADAL)](https://techcommunity.microsoft.com/t5/microsoft-entra-azure-ad-blog/microsoft-entra-change-announcements-march-2023-train/ba-p/2967448)|Retirement|Jun 30, 2023| |[Azure AD Graph API](https://techcommunity.microsoft.com/t5/microsoft-entra-azure-ad-blog/microsoft-entra-change-announcements-march-2023-train/ba-p/2967448)|Deprecation|Jun 30, 2023| |[Azure AD PowerShell and MSOnline PowerShell](https://techcommunity.microsoft.com/t5/microsoft-entra-azure-ad-blog/microsoft-entra-change-announcements-march-2023-train/ba-p/2967448)|Deprecation|Jun 30, 2023| |
active-directory | Whats New Archive | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/fundamentals/whats-new-archive.md | Privileged Identity Management (PIM) administrators can now export all active an For more information, see [View activity and audit history for Azure resource roles in PIM](../privileged-identity-management/azure-pim-resource-rbac.md). --## November/December 2018 --### Users removed from synchronization scope no longer switch to cloud-only accounts --**Type:** Fixed -**Service category:** User Management -**Product capability:** Directory -->[!Important] ->We've heard and understand your frustration because of this fix. Therefore, we've reverted this change until such time that we can make the fix easier for you to implement in your organization. --We've fixed a bug in which the DirSyncEnabled flag of a user would be erroneously switched to **False** when the Active Directory Domain Services (AD DS) object was excluded from synchronization scope and then moved to the Recycle Bin in Azure AD on the following sync cycle. As a result of this fix, if the user is excluded from sync scope and afterwards restored from Azure AD Recycle Bin, the user account remains as synchronized from on-premises AD, as expected, and cannot be managed in the cloud since its source of authority (SoA) remains as on-premises AD. --Prior to this fix, there was an issue when the DirSyncEnabled flag was switched to False. It gave the wrong impression that these accounts were converted to cloud-only objects and that the accounts could be managed in the cloud. However, the accounts still retained their SoA as on-premises and all synchronized properties (shadow attributes) coming from on-premises AD. This condition caused multiple issues in Azure AD and other cloud workloads (like Exchange Online) that expected to treat these accounts as synchronized from AD but were now behaving like cloud-only accounts. --At this time, the only way to truly convert a synchronized-from-AD account to cloud-only account is by disabling DirSync at the tenant level, which triggers a backend operation to transfer the SoA. This type of SoA change requires (but is not limited to) cleaning all the on-premises related attributes (such as LastDirSyncTime and shadow attributes) and sending a signal to other cloud workloads to have its respective object converted to a cloud-only account too. --This fix consequently prevents direct updates on the ImmutableID attribute of a user synchronized from AD, which in some scenarios in the past were required. By design, the ImmutableID of an object in Azure AD, as the name implies, is meant to be immutable. New features implemented in Azure AD Connect Health and Azure AD Connect Synchronization client are available to address such scenarios: --- **Large-scale ImmutableID update for many users in a staged approach**-- For example, you need to do a lengthy AD DS inter-forest migration. Solution: Use Azure AD Connect to **Configure Source Anchor** and, as the user migrates, copy the existing ImmutableID values from Azure AD into the local AD DS user's ms-DS-Consistency-Guid attribute of the new forest. For more information, see [Using ms-DS-ConsistencyGuid as sourceAnchor](../hybrid/plan-connect-design-concepts.md#using-ms-ds-consistencyguid-as-sourceanchor). --- **Large-scale ImmutableID updates for many users in one shot**-- For example, while implementing Azure AD Connect you make a mistake, and now you need to change the SourceAnchor attribute. 
Solution: Disable DirSync at the tenant level and clear all the invalid ImmutableID values. For more information, see [Turn off directory synchronization for Office 365](/office365/enterprise/turn-off-directory-synchronization). --- **Rematch on-premises user with an existing user in Azure AD**- For example, a user that has been re-created in AD DS generates a duplicate Azure AD account instead of rematching it with an existing Azure AD account (orphaned object). Solution: Use Azure AD Connect Health in the Azure portal to remap the Source Anchor/ImmutableID. For more information, see [Orphaned object scenario](../hybrid/how-to-connect-health-diagnose-sync-errors.md#orphaned-object-scenario). --### Breaking Change: Updates to the audit and sign-in logs schema through Azure Monitor --**Type:** Changed feature -**Service category:** Reporting -**Product capability:** Monitoring & Reporting --We're currently publishing both the Audit and Sign-in log streams through Azure Monitor, so you can seamlessly integrate the log files with your SIEM tools or with Log Analytics. Based on your feedback, and in preparation for this feature's general availability announcement, we're making the following changes to our schema. These schema changes and their related documentation updates will happen by the first week of January. --#### New fields in the Audit schema -We're adding a new **Operation Type** field, to provide the type of operation performed on the resource. For example, **Add**, **Update**, or **Delete**. --#### Changed fields in the Audit schema -The following fields are changing in the Audit schema: --|Field name|What changed|Old values|New Values| -|-||-|-| -|Category|This was the **Service Name** field. It's now the **Audit Categories** field. **Service Name** has been renamed to the **loggedByService** field.|<ul><li>Account Provisioning</li><li>Core Directory</li><li>Self-service Password Reset</li></ul>|<ul><li>User Management</li><li>Group Management</li><li>App Management</li></ul>| -|targetResources|Includes **TargetResourceType** at the top level.| |<ul><li>Policy</li><li>App</li><li>User</li><li>Group</li></ul>| -|loggedByService|Provides the name of the service that generated the audit log.|Null|<ul><li>Account Provisioning</li><li>Core Directory</li><li>Self-service password reset</li></ul>| -|Result|Provides the result of the audit logs. Previously, this was enumerated, but we now show the actual value.|<ul><li>0</li><li>1</li></ul>|<ul><li>Success</li><li>Failure</li></ul>| --#### Changed fields in the Sign-in schema -The following fields are changing in the Sign-in schema: --|Field name|What changed|Old values|New Values| -|-||-|-| -|appliedConditionalAccessPolicies|This was the **conditionalaccessPolicies** field. It's now the **appliedConditionalAccessPolicies** field.|No change|No change| -|conditionalAccessStatus|Provides the result of the Conditional Access Policy Status at sign-in. Previously, this was enumerated, but we now show the actual value.|<ul><li>0</li><li>1</li><li>2</li><li>3</li></ul>|<ul><li>Success</li><li>Failure</li><li>Not Applied</li><li>Disabled</li></ul>| -|appliedConditionalAccessPolicies: result|Provides the result of the individual Conditional Access Policy Status at sign-in. 
Previously, this was enumerated, but we now show the actual value.|<ul><li>0</li><li>1</li><li>2</li><li>3</li></ul>|<ul><li>Success</li><li>Failure</li><li>Not Applied</li><li>Disabled</li></ul>| --For more information about the schema, see [Interpret the Azure AD audit logs schema in Azure Monitor (preview)](../reports-monitoring/overview-reports.md) ----### Identity Protection improvements to the supervised machine learning model and the risk score engine --**Type:** Changed feature -**Service category:** Identity Protection -**Product capability:** Risk Scores --Improvements to the Identity Protection-related user and sign-in risk assessment engine can help to improve user risk accuracy and coverage. Administrators may notice that user risk level is no longer directly linked to the risk level of specific detections, and that there's an increase in the number and level of risky sign-in events. --Risk detections are now evaluated by the supervised machine learning model, which calculates user risk by using additional features of the user's sign-ins and a pattern of detections. Based on this model, the administrator might find users with high risk scores, even if detections associated with that user are of low or medium risk. ----### Administrators can reset their own password using the Microsoft Authenticator app (Public preview) --**Type:** Changed feature -**Service category:** Self Service Password Reset -**Product capability:** User Authentication --Azure AD administrators can now reset their own password using the Microsoft Authenticator app notifications or a code from any mobile authenticator app or hardware token. To reset their own password, administrators will now be able to use two of the following methods: --- Microsoft Authenticator app notification--- Other mobile authenticator app / Hardware token code--- Email--- Phone call--- Text message--For more information about using the Microsoft Authenticator app to reset passwords, see [Azure AD self-service password reset - Mobile app and SSPR (Preview)](../authentication/concept-sspr-howitworks.md#mobile-app-and-sspr) ----### New Azure AD Cloud Device Administrator role (Public preview) --**Type:** New feature -**Service category:** Device Registration and Management -**Product capability:** Access control --Administrators can assign users to the new Cloud Device Administrator role to perform cloud device administrator tasks. Users assigned the Cloud Device Administrators role can enable, disable, and delete devices in Azure AD, along with being able to read Windows 10 BitLocker keys (if present) in the Azure portal. --For more information about roles and permissions, see [Assigning administrator roles in Azure Active Directory](../roles/permissions-reference.md) ----### Manage your devices using the new activity timestamp in Azure AD (Public preview) --**Type:** New feature -**Service category:** Device Registration and Management -**Product capability:** Device Lifecycle Management --We realize that over time you must refresh and retire your organizations' devices in Azure AD, to avoid having stale devices in your environment. To help with this process, Azure AD now updates your devices with a new activity timestamp, helping you to manage your device lifecycle. 
--For more information about how to get and use this timestamp, see [How To: Manage the stale devices in Azure AD](../devices/manage-stale-devices.md) ----### Administrators can require users to accept a terms of use on each device --**Type:** New feature -**Service category:** Terms of use -**Product capability:** Governance --Administrators can now turn on the **Require users to consent on every device** option to require your users to accept your terms of use on every device they're using on your tenant. --For more information, see the [Per-device terms of use section of the Azure Active Directory terms of use feature](../conditional-access/terms-of-use.md#per-device-terms-of-use). ----### Administrators can configure a terms of use to expire based on a recurring schedule --**Type:** New feature -**Service category:** Terms of use -**Product capability:** Governance ---Administrators can now turn on the **Expire consents** option to make a terms of use expire for all of your users based on your specified recurring schedule. The schedule can be annually, bi-annually, quarterly, or monthly. After the terms of use expire, users must reaccept. --For more information, see the [Add terms of use section of the Azure Active Directory terms of use feature](../conditional-access/terms-of-use.md#add-terms-of-use). ----### Administrators can configure a terms of use to expire based on each user's schedule --**Type:** New feature -**Service category:** Terms of use -**Product capability:** Governance --Administrators can now specify a duration after which users must reaccept a terms of use. For example, administrators can specify that users must reaccept a terms of use every 90 days. --For more information, see the [Add terms of use section of the Azure Active Directory terms of use feature](../conditional-access/terms-of-use.md#add-terms-of-use). ----### New Azure AD Privileged Identity Management (PIM) emails for Azure Active Directory roles --**Type:** New feature -**Service category:** Privileged Identity Management -**Product capability:** Privileged Identity Management --Customers using Azure AD Privileged Identity Management (PIM) can now receive a weekly digest email, including the following information for the last seven days: --- Overview of the top eligible and permanent role assignments--- Number of users activating roles--- Number of users assigned to roles in PIM--- Number of users assigned to roles outside of PIM--- Number of users "made permanent" in PIM--For more information about PIM and the available email notifications, see [Email notifications in PIM](../privileged-identity-management/pim-email-notifications.md). ----### Group-based licensing is now generally available --**Type:** Changed feature -**Service category:** Other -**Product capability:** Directory --Group-based licensing is out of public preview and is now generally available. As part of this general release, we've made this feature more scalable and have added the ability to reprocess group-based licensing assignments for a single user and the ability to use group-based licensing with Office 365 E3/A3 licenses. 
--For more information about group-based licensing, see [What is group-based licensing in Azure Active Directory?](./active-directory-licensing-whatis-azure-portal.md) ----### New Federated Apps available in Azure AD app gallery - November 2018 --**Type:** New feature -**Service category:** Enterprise Apps -**Product capability:** 3rd Party Integration --In November 2018, we've added these 26 new apps with Federation support to the app gallery: --[CoreStack](https://cloud.corestack.io/site/login), [HubSpot](../saas-apps/hubspot-tutorial.md), [GetThere](../saas-apps/getthere-tutorial.md), [Gra-Pe](../saas-apps/grape-tutorial.md), [eHour](https://getehour.com/try-now), [Consent2Go](../saas-apps/consent2go-tutorial.md), [Appinux](../saas-apps/appinux-tutorial.md), [DriveDollar](https://azuremarketplace.microsoft.com/marketplace/apps/savitas.drivedollar-azuread?tab=Overview), [Useall](../saas-apps/useall-tutorial.md), [Infinite Campus](../saas-apps/infinitecampus-tutorial.md), [Alaya](https://alayagood.com), [HeyBuddy](../saas-apps/heybuddy-tutorial.md), [Wrike SAML](../saas-apps/wrike-tutorial.md), [Drift](../saas-apps/drift-tutorial.md), [Zenegy for Business Central 365](https://accounting.zenegy.com/), [Everbridge Member Portal](../saas-apps/everbridge-tutorial.md), [Ivanti Service Manager (ISM)](../saas-apps/ivanti-service-manager-tutorial.md), [Peakon](../saas-apps/peakon-tutorial.md), [Allbound SSO](../saas-apps/allbound-sso-tutorial.md), [Plex Apps - Classic Test](https://test.plexonline.com/signon), [Plex Apps – Classic](https://www.plexonline.com/signon), [Plex Apps - UX Test](https://test.cloud.plex.com/sso), [Plex Apps – UX](https://cloud.plex.com/sso), [Plex Apps – IAM](https://accounts.plex.com/) --For more information about the apps, see [SaaS application integration with Azure Active Directory](../saas-apps/tutorial-list.md). For more information about listing your application in the Azure AD app gallery, see [List your application in the Azure Active Directory application gallery](../manage-apps/v2-howto-app-gallery-listing.md). ----## October 2018 --### Azure AD Logs now work with Azure Log Analytics (Public preview) --**Type:** New feature -**Service category:** Reporting -**Product capability:** Monitoring & Reporting --We're excited to announce that you can now forward your Azure AD logs to Azure Log Analytics! This top-requested feature helps give you even better access to analytics for your business, operations, and security, as well as a way to help monitor your infrastructure. For more information, see the [Azure Active Directory Activity logs in Azure Log Analytics now available](https://techcommunity.microsoft.com/t5/Azure-Active-Directory-Identity/Azure-Active-Directory-Activity-logs-in-Azure-Log-Analytics-now/ba-p/274843) blog. 
----### New Federated Apps available in Azure AD app gallery - October 2018 --**Type:** New feature -**Service category:** Enterprise Apps -**Product capability:** 3rd Party Integration --In October 2018, we've added these 14 new apps with Federation support to the app gallery: --[My Award Points](../saas-apps/myawardpoints-tutorial.md), [Vibe HCM](../saas-apps/vibehcm-tutorial.md), ambyint, [MyWorkDrive](../saas-apps/myworkdrive-tutorial.md), [BorrowBox](../saas-apps/borrowbox-tutorial.md), Dialpad, [ON24 Virtual Environment](../saas-apps/on24-tutorial.md), [RingCentral](../saas-apps/ringcentral-tutorial.md), [Zscaler Three](../saas-apps/zscaler-three-tutorial.md), [Phraseanet](../saas-apps/phraseanet-tutorial.md), [Appraisd](../saas-apps/appraisd-tutorial.md), [Workspot Control](../saas-apps/workspotcontrol-tutorial.md), [Shuccho Navi](../saas-apps/shucchonavi-tutorial.md), [Glassfrog](../saas-apps/glassfrog-tutorial.md) --For more information about the apps, see [SaaS application integration with Azure Active Directory](../saas-apps/tutorial-list.md). For more information about listing your application in the Azure AD app gallery, see [List your application in the Azure Active Directory application gallery](../manage-apps/v2-howto-app-gallery-listing.md). ----### Azure AD Domain Services Email Notifications --**Type:** New feature -**Service category:** Azure AD Domain Services -**Product capability:** Azure AD Domain Services --Azure AD Domain Services provides alerts on the Azure portal about misconfigurations or problems with your managed domain. These alerts include step-by-step guides so you can try to fix the problems without having to contact support. --Starting in October, you'll be able to customize the notification settings for your managed domain so when new alerts occur, an email is sent to a designated group of people, eliminating the need to constantly check the portal for updates. --For more information, see [Notification settings in Azure AD Domain Services](../../active-directory-domain-services/notifications.md). ----### Azure portal supports using the ForceDelete domain API to delete custom domains --**Type:** Changed feature -**Service category:** Directory Management -**Product capability:** Directory --We're pleased to announce that you can now use the ForceDelete domain API to delete your custom domain names by asynchronously renaming references, like users, groups, and apps from your custom domain name (contoso.com) back to the initial default domain name (contoso.onmicrosoft.com). --This change helps you to more quickly delete your custom domain names if your organization no longer uses the name, or if you need to use the domain name with another Azure AD. --For more information, see [Delete a custom domain name](../enterprise-users/domains-manage.md#delete-a-custom-domain-name). ----## September 2018 --### Updated administrator role permissions for dynamic groups --**Type:** Fixed -**Service category:** Group Management -**Product capability:** Collaboration --We've fixed an issue so specific administrator roles can now create and update dynamic membership rules, without needing to be the owner of the group. 
--The roles are: --- Global administrator--- Intune administrator--- User administrator--For more information, see [Create a dynamic group and check status](../enterprise-users/groups-create-rule.md) ----### Simplified Single Sign-On (SSO) configuration settings for some third-party apps --**Type:** New feature -**Service category:** Enterprise Apps -**Product capability:** SSO --We realize that setting up Single Sign-On (SSO) for Software as a Service (SaaS) apps can be challenging due to the unique nature of each app's configuration. We've built a simplified configuration experience to auto-populate the SSO configuration settings for the following third-party SaaS apps: --- Zendesk--- ArcGIS Online--- Jamf Pro--To start using this one-click experience, go to the **Azure portal** > **SSO configuration** page for the app. For more information, see [SaaS application integration with Azure Active Directory](../saas-apps/tutorial-list.md) ----### Azure Active Directory - Where is your data located? page --**Type:** New feature -**Service category:** Other -**Product capability:** GoLocal --Select your company's region from the **Azure Active Directory - Where is your data located** page to view which Azure datacenter houses your Azure AD data at rest for all Azure AD services. You can filter the information by specific Azure AD services for your company's region. --To access this feature and for more information, see [Azure Active Directory - Where is your data located](https://aka.ms/AADDataMap). ----### New deployment plan available for the My Apps Access panel --**Type:** New feature -**Service category:** My Apps -**Product capability:** SSO --Check out the new deployment plan that's available for the My Apps Access panel (https://aka.ms/deploymentplans). -The My Apps Access panel provides users with a single place to find and access their apps. This portal also provides users with self-service opportunities, such as requesting access to apps and groups, or managing access to these resources on behalf of others. --For more information, see [What is the My Apps portal?](https://support.microsoft.com/account-billing/sign-in-and-start-apps-from-the-my-apps-portal-2f3b1bae-0e5a-4a86-a33e-876fbd2a4510) ----### New Troubleshooting and Support tab on the Sign-ins Logs page of the Azure portal --**Type:** New feature -**Service category:** Reporting -**Product capability:** Monitoring & Reporting --The new **Troubleshooting and Support** tab on the **Sign-ins** page of the Azure portal is intended to help admins and support engineers troubleshoot issues related to Azure AD sign-ins. This new tab provides the error code, error message, and remediation recommendations (if any) to help solve the problem. If you're unable to resolve the problem, we also give you a new way to create a support ticket using the **Copy to clipboard** experience, which populates the **Request ID** and **Date (UTC)** fields for the log file in your support ticket. -- ----### Enhanced support for custom extension properties used to create dynamic membership rules --**Type:** Changed feature -**Service category:** Group Management -**Product capability:** Collaboration --With this update, you can now select the **Get custom extension properties** link from the dynamic user group rule builder, enter your unique app ID, and receive the full list of custom extension properties to use when creating a dynamic membership rule for users. This list can also be refreshed to get any new custom extension properties for that app. 
--For more information about using custom extension properties for dynamic membership rules, see [Extension properties and custom extension properties](../enterprise-users/groups-dynamic-membership.md#extension-properties-and-custom-extension-properties) ----### New approved client apps for Azure AD app-based Conditional Access --**Type:** Plan for change -**Service category:** Conditional Access -**Product capability:** Identity security and protection --The following apps are on the list of [approved client apps](../conditional-access/concept-conditional-access-conditions.md#client-apps): --- Microsoft To-Do--- Microsoft Stream--For more information, see: --- [Azure AD app-based Conditional Access](../conditional-access/app-based-conditional-access.md)----### New support for Self-Service Password Reset from the Windows 7/8/8.1 Lock screen --**Type:** New feature -**Service category:** SSPR -**Product capability:** User Authentication --After you set up this new feature, your users will see a link to reset their password from the **Lock** screen of a device running Windows 7, Windows 8, or Windows 8.1. By clicking that link, the user is guided through the same password reset flow as through the web browser. --For more information, see [How to enable password reset from Windows 7, 8, and 8.1](../authentication/howto-sspr-windows.md) ----### Change notice: Authorization codes will no longer be available for reuse --**Type:** Plan for change -**Service category:** Authentications (Logins) -**Product capability:** User Authentication --Starting on November 15, 2018, Azure AD will stop accepting previously used authentication codes for apps. This security change helps to bring Azure AD in line with the OAuth specification and will be enforced on both the v1 and v2 endpoints. --If your app reuses authorization codes to get tokens for multiple resources, we recommend that you use the code to get a refresh token, and then use that refresh token to acquire additional tokens for other resources. Authorization codes can only be used once, but refresh tokens can be used multiple times across multiple resources. An app that attempts to reuse an authentication code during the OAuth code flow will get an invalid_grant error. --For this and other protocols-related changes, see [the full list of what's new for authentication](../develop/reference-breaking-changes.md). 
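To illustrate the authorization-code change notice above: the recommended pattern is to redeem the code exactly once for an access token plus refresh token, then use the refresh token for each additional resource. A minimal sketch against the v2.0 token endpoint (the notice also covers v1; tenant, client, and scope values here are placeholders, not a definitive implementation):

```javascript
// Hedged sketch of the recommended pattern: redeem the authorization code
// once, then use the refresh token for each additional resource.
// Tenant, client ID/secret, redirect URI, and scopes are placeholders.
const tokenEndpoint =
  "https://login.microsoftonline.com/<tenant-id>/oauth2/v2.0/token";

async function redeemCode(code) {
  const body = new URLSearchParams({
    client_id: "<client-id>",
    client_secret: "<client-secret>",
    grant_type: "authorization_code",
    code,
    redirect_uri: "https://localhost/redirect",
    scope: "https://graph.microsoft.com/.default offline_access",
  });
  const res = await fetch(tokenEndpoint, { method: "POST", body });
  return res.json(); // contains access_token and refresh_token
}

// Reusing the *code* here would fail with invalid_grant; use the refresh
// token to get a token for another resource instead.
async function tokenForOtherResource(refreshToken) {
  const body = new URLSearchParams({
    client_id: "<client-id>",
    client_secret: "<client-secret>",
    grant_type: "refresh_token",
    refresh_token: refreshToken,
    scope: "https://vault.azure.net/.default",
  });
  const res = await fetch(tokenEndpoint, { method: "POST", body });
  return res.json();
}
```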
----### New Federated Apps available in Azure AD app gallery - September 2018 --**Type:** New feature -**Service category:** Enterprise Apps -**Product capability:** 3rd Party Integration --In September 2018, we've added these 16 new apps with Federation support to the app gallery: --[Uberflip](../saas-apps/uberflip-tutorial.md), [Comeet Recruiting Software](../saas-apps/comeetrecruitingsoftware-tutorial.md), [Workteam](../saas-apps/workteam-tutorial.md), [ArcGIS Enterprise](../saas-apps/arcgisenterprise-tutorial.md), [Nuclino](../saas-apps/nuclino-tutorial.md), [JDA Cloud](../saas-apps/jdacloud-tutorial.md), [Snowflake](../saas-apps/snowflake-tutorial.md), NavigoCloud, [Figma](../saas-apps/figma-tutorial.md), join.me, [ZephyrSSO](../saas-apps/zephyrsso-tutorial.md), [Silverback](../saas-apps/silverback-tutorial.md), Riverbed Xirrus EasyPass, [Rackspace SSO](../saas-apps/rackspacesso-tutorial.md), Enlyft SSO for Azure, SurveyMonkey, [Convene](../saas-apps/convene-tutorial.md), [dmarcian](../saas-apps/dmarcian-tutorial.md) --For more information about the apps, see [SaaS application integration with Azure Active Directory](../saas-apps/tutorial-list.md). For more information about listing your application in the Azure AD app gallery, see [List your application in the Azure Active Directory application gallery](../manage-apps/v2-howto-app-gallery-listing.md). ----### Support for additional claims transformations methods --**Type:** New feature -**Service category:** Enterprise Apps -**Product capability:** SSO --We've introduced new claim transformation methods, ToLower() and ToUpper(), which can be applied to SAML tokens from the SAML-based **Single Sign-On Configuration** page. --For more information, see [How to customize claims issued in the SAML token for enterprise applications in Azure AD](../develop/active-directory-saml-claims-customization.md) ----### Updated SAML-based app configuration UI (preview) --**Type:** Changed feature -**Service category:** Enterprise Apps -**Product capability:** SSO --As part of our updated SAML-based app configuration UI, you'll get: --- An updated walkthrough experience for configuring your SAML-based apps.--- More visibility about what's missing or incorrect in your configuration.--- The ability to add multiple email addresses for expiration certificate notification.--- New claim transformation methods, ToLower() and ToUpper(), and more.--- A way to upload your own token signing certificate for your enterprise apps.--- A way to set the NameID Format for SAML apps, and a way to set the NameID value as Directory Extensions.--To turn on this updated view, click the **Try out our new experience** link from the top of the **Single Sign-On** page. For more information, see [Tutorial: Configure SAML-based single sign-on for an application with Azure Active Directory](../manage-apps/view-applications-portal.md). ----## August 2018 --### Changes to Azure Active Directory IP address ranges --**Type:** Plan for change -**Service category:** Other -**Product capability:** Platform --We're introducing larger IP ranges to Azure AD, which means if you've configured Azure AD IP address ranges for your firewalls, routers, or Network Security Groups, you'll need to update them. We're making this update so you won't have to change your firewall, router, or Network Security Groups IP range configurations again when Azure AD adds new endpoints. --Network traffic is moving to these new ranges over the next two months. 
To continue with uninterrupted service, you must add these updated values to your IP Addresses before September 10, 2018: --- 20.190.128.0/18--- 40.126.0.0/18--We strongly recommend not removing the old IP Address ranges until all of your network traffic has moved to the new ranges. For updates about the move and to learn when you can remove the old ranges, see [Office 365 URLs and IP address ranges](https://support.office.com/article/Office-365-URLs-and-IP-address-ranges-8548a211-3fe7-47cb-abb1-355ea5aa88a2). ----### Change notice: Authorization codes will no longer be available for reuse --**Type:** Plan for change -**Service category:** Authentications (Logins) -**Product capability:** User Authentication --Starting on November 15, 2018, Azure AD will stop accepting previously used authentication codes for apps. This security change helps to bring Azure AD in line with the OAuth specification and will be enforced on both the v1 and v2 endpoints. --If your app reuses authorization codes to get tokens for multiple resources, we recommend that you use the code to get a refresh token, and then use that refresh token to acquire additional tokens for other resources. Authorization codes can only be used once, but refresh tokens can be used multiple times across multiple resources. An app that attempts to reuse an authentication code during the OAuth code flow will get an invalid_grant error. --For this and other protocols-related changes, see [the full list of what's new for authentication](../develop/reference-breaking-changes.md). ----### Converged security info management for self-service password reset (SSPR) and multifactor authentication (MFA) --**Type:** New feature -**Service category:** SSPR -**Product capability:** User Authentication --This new feature helps people manage their security info (such as phone number, mobile app, and so on) for SSPR and multifactor authentication (MFA) in a single location and experience; as compared to previously, where it was done in two different locations. --This converged experience also works for people using either SSPR or multifactor authentication (MFA). Additionally, if your organization doesn't enforce multifactor authentication (MFA) or SSPR registration, people can still register any multifactor authentication (MFA) or SSPR security info methods allowed by your organization from the My Apps portal. --This is an opt-in public preview. Administrators can turn on the new experience (if desired) for a selected group or for all users in a tenant. For more information about the converged experience, see the [Converged experience blog](https://cloudblogs.microsoft.com/enterprisemobility/2018/08/06/mfa-and-sspr-updates-now-in-public-preview/) ----### New HTTP-Only cookies setting in Azure AD Application proxy apps --**Type:** New feature -**Service category:** App Proxy -**Product capability:** Access Control --There's a new setting called **HTTP-Only Cookies** in your Application Proxy apps. This setting helps provide extra security by including the HTTPOnly flag in the HTTP response header for both Application Proxy access and session cookies, stopping access to the cookie from a client-side script and further preventing actions like copying or modifying the cookie. Although this flag hasn't been used previously, your cookies have always been encrypted and transmitted using a TLS connection to help protect against improper modifications. --This setting isn't compatible with apps using ActiveX controls, such as Remote Desktop. 
If you're in this situation, we recommend that you turn off this setting. --For more information about the HTTP-Only Cookies setting, see [Publish applications using Azure AD Application Proxy](../app-proxy/application-proxy-add-on-premises-application.md). ----### Privileged Identity Management (PIM) for Azure resources supports Management Group resource types --**Type:** New feature -**Service category:** Privileged Identity Management -**Product capability:** Privileged Identity Management --Just-In-Time activation and assignment settings can now be applied to Management Group resource types, just like you already do for Subscriptions, Resource Groups, and Resources (such as VMs, App Services, and more). In addition, anyone with a role that provides administrator access for a Management Group can discover and manage that resource in PIM. --For more information about PIM and Azure resources, see [Discover and manage Azure resources by using Privileged Identity Management](../privileged-identity-management/pim-resource-roles-discover-resources.md) ----### Application access (preview) provides faster access to the Azure portal --**Type:** New feature -**Service category:** Privileged Identity Management -**Product capability:** Privileged Identity Management --Today, when activating a role using PIM, it can take over 10 minutes for the permissions to take effect. If you choose to use Application access, which is currently in public preview, administrators can access the Azure portal as soon as the activation request completes. --Currently, Application access only supports the Azure portal experience and Azure resources. For more information about PIM and Application access, see [What is Azure AD Privileged Identity Management?](../privileged-identity-management/pim-configure.md) ----### New Federated Apps available in Azure AD app gallery - August 2018 --**Type:** New feature -**Service category:** Enterprise Apps -**Product capability:** 3rd Party Integration --In August 2018, we've added these 16 new apps with Federation support to the app gallery: --[Hornbill](../saas-apps/hornbill-tutorial.md), [Bridgeline Unbound](../saas-apps/bridgelineunbound-tutorial.md), [Sauce Labs - Mobile and Web Testing](../saas-apps/saucelabs-mobileandwebtesting-tutorial.md), [Meta Networks Connector](../saas-apps/metanetworksconnector-tutorial.md), [Way We Do](../saas-apps/waywedo-tutorial.md), [Spotinst](../saas-apps/spotinst-tutorial.md), [ProMaster (by Inlogik)](../saas-apps/promaster-tutorial.md), SchoolBooking, [4me](../saas-apps/4me-tutorial.md), [Dossier](../saas-apps/dossier-tutorial.md), [N2F - Expense reports](../saas-apps/n2f-expensereports-tutorial.md), [Comm100 Live Chat](../saas-apps/comm100livechat-tutorial.md), [SafeConnect](../saas-apps/safeconnect-tutorial.md), [ZenQMS](../saas-apps/zenqms-tutorial.md), [eLuminate](../saas-apps/eluminate-tutorial.md), [Dovetale](../saas-apps/dovetale-tutorial.md). --For more information about the apps, see [SaaS application integration with Azure Active Directory](../saas-apps/tutorial-list.md). For more information about listing your application in the Azure AD app gallery, see [List your application in the Azure Active Directory application gallery](../manage-apps/v2-howto-app-gallery-listing.md). 
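To make the **HTTP-Only Cookies** setting described above concrete, here is what the flag means in any web app. This is an illustrative Express sketch, not Application Proxy code; Application Proxy sets the flag on its own cookies when you turn the setting on:

```javascript
// Illustrative only: an HttpOnly cookie is sent with requests as usual but
// is invisible to document.cookie in client-side script, which is what the
// Application Proxy setting adds to its access and session cookies.
const express = require("express");
const app = express();

app.get("/signin", (req, res) => {
  res.cookie("session", "opaque-session-id", {
    httpOnly: true, // not readable from client-side JavaScript
    secure: true,   // only transmitted over TLS
  });
  res.send("Signed in");
});

app.listen(3000);
```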
----### Native Tableau support is now available in Azure AD Application Proxy --**Type:** Changed feature -**Service category:** App Proxy -**Product capability:** Access Control --With the update of our pre-authentication protocol from OpenID Connect to the OAuth 2.0 Code Grant protocol, you no longer have to do any additional configuration to use Tableau with Application Proxy. This protocol change also helps Application Proxy better support more modern apps by using only HTTP redirects, which are commonly supported in JavaScript and HTML tags. ----### New support to add Google as an identity provider for B2B guest users in Azure Active Directory (preview) --**Type:** New feature -**Service category:** B2B -**Product capability:** B2B/B2C --By setting up federation with Google in your organization, you can let invited Gmail users sign in to your shared apps and resources using their existing Google account, without having to create a personal Microsoft Account (MSA) or an Azure AD account. --This is an opt-in public preview. For more information about Google federation, see [Add Google as an identity provider for B2B guest users](../external-identities/google-federation.md). ----## July 2018 --### Improvements to Azure Active Directory email notifications --**Type:** Changed feature -**Service category:** Other -**Product capability:** Identity lifecycle management --Azure Active Directory (Azure AD) emails now feature an updated design, as well as changes to the sender email address and sender display name, when sent from the following: --- Azure AD Access Reviews-- Azure AD Connect Health-- Azure AD Identity Protection-- Azure AD Privileged Identity Management-- Enterprise App Expiring Certificate Notifications-- Enterprise App Provisioning Service Notifications--The email notifications will be sent from the following email address and display name: --- Email address: azure-noreply@microsoft.com-- Display name: Microsoft Azure--For an example of some of the new e-mail designs and more information, see [Email notifications in Azure AD PIM](../privileged-identity-management/pim-email-notifications.md). ----### Azure AD Activity Logs are now available through Azure Monitor --**Type:** New feature -**Service category:** Reporting -**Product capability:** Monitoring & Reporting --The Azure AD Activity Logs are now available in public preview for Azure Monitor (Azure's platform-wide monitoring service). Azure Monitor offers you long-term retention and seamless integration, in addition to these improvements: --- Long-term retention by routing your log files to your own Azure storage account.--- Seamless SIEM integration, without requiring you to write or maintain custom scripts.--- Seamless integration with your own custom solutions, analytics tools, or incident management solutions.--For more information about these new capabilities, see our blog [Azure AD activity logs in Azure Monitor diagnostics is now in public preview](https://cloudblogs.microsoft.com/enterprisemobility/2018/07/26/azure-ad-activity-logs-in-azure-monitor-diagnostics-now-in-public-preview/) and our documentation, [Azure Active Directory activity logs in Azure Monitor (preview)](../reports-monitoring/concept-activity-logs-azure-monitor.md). 
----### Conditional Access information added to the Azure AD sign-ins report --**Type:** New feature -**Service category:** Reporting -**Product capability:** Identity Security & Protection --This update lets you see which policies are evaluated when a user signs in along with the policy outcome. In addition, the report now includes the type of client app used by the user, so you can identify legacy protocol traffic. Report entries can also now be searched for a correlation ID, which can be found in the user-facing error message and can be used to identify and troubleshoot the matching sign-in request. ----### View legacy authentications through Sign-ins activity logs --**Type:** New feature -**Service category:** Reporting -**Product capability:** Monitoring & Reporting --With the introduction of the **Client App** field in the Sign-in activity logs, customers can now see users that are using legacy authentications. Customers will be able to access this information using the Sign-ins Microsoft Graph API or through the Sign-in activity logs in Azure portal where you can use the **Client App** control to filter on legacy authentications. Check out the documentation for more details. ----### New Federated Apps available in Azure AD app gallery - July 2018 --**Type:** New feature -**Service category:** Enterprise Apps -**Product capability:** 3rd Party Integration --In July 2018, we've added these 16 new apps with Federation support to the app gallery: --[Innovation Hub](../saas-apps/innovationhub-tutorial.md), [Leapsome](../saas-apps/leapsome-tutorial.md), [Certain Admin SSO](../saas-apps/certainadminsso-tutorial.md), PSUC Staging, [iPass SmartConnect](../saas-apps/ipasssmartconnect-tutorial.md), [Screencast-O-Matic](../saas-apps/screencast-tutorial.md), PowerSchool Unified Classroom, [Eli Onboarding](../saas-apps/elionboarding-tutorial.md), [Bomgar Remote Support](../saas-apps/bomgarremotesupport-tutorial.md), [Nimblex](../saas-apps/nimblex-tutorial.md), [Imagineer WebVision](../saas-apps/imagineerwebvision-tutorial.md), [Insight4GRC](../saas-apps/insight4grc-tutorial.md), [SecureW2 JoinNow Connector](../saas-apps/securejoinnow-tutorial.md), [Kanbanize](../saas-apps/kanbanize-tutorial.md), [SmartLPA](../saas-apps/smartlpa-tutorial.md), [Skills Base](../saas-apps/skillsbase-tutorial.md) --For more information about the apps, see [SaaS application integration with Azure Active Directory](../saas-apps/tutorial-list.md). For more information about listing your application in the Azure AD app gallery, see [List your application in the Azure Active Directory application gallery](../manage-apps/v2-howto-app-gallery-listing.md). ----### New user provisioning SaaS app integrations - July 2018 --**Type:** New feature -**Service category:** App Provisioning -**Product capability:** 3rd Party Integration --Azure AD allows you to automate the creation, maintenance, and removal of user identities in SaaS applications such as Dropbox, Salesforce, ServiceNow, and more. For July 2018, we have added user provisioning support for the following applications in the Azure AD app gallery: --- [Cisco WebEx](../saas-apps/cisco-webex-provisioning-tutorial.md)--- [Bonusly](../saas-apps/bonusly-provisioning-tutorial.md)--For a list of all applications that support user provisioning in the Azure AD gallery, see [SaaS application integration with Azure Active Directory](../saas-apps/tutorial-list.md). 
----### Connect Health for Sync - An easier way to fix orphaned and duplicate attribute sync errors --**Type:** New feature -**Service category:** AD Connect -**Product capability:** Monitoring & Reporting --Azure AD Connect Health introduces self-service remediation to help you highlight and fix sync errors. This feature troubleshoots duplicated attribute sync errors and fixes objects that are orphaned from Azure AD. This diagnosis has the following benefits: --- Narrows down duplicated attribute sync errors, providing specific fixes--- Applies a fix for dedicated Azure AD scenarios, resolving errors in a single step--- No upgrade or configuration is required to turn on and use this feature--For more information, see [Diagnose and remediate duplicated attribute sync errors](../hybrid/how-to-connect-health-diagnose-sync-errors.md) ----### Visual updates to the Azure AD and MSA sign-in experiences --**Type:** Changed feature -**Service category:** Azure AD -**Product capability:** User Authentication --We've updated the UI for Microsoft's online services sign-in experience, such as for Office 365 and Azure. This change makes the screens less cluttered and more straightforward. For more information about this change, see the [Upcoming improvements to the Azure AD sign-in experience](https://cloudblogs.microsoft.com/enterprisemobility/2018/04/04/upcoming-improvements-to-the-azure-ad-sign-in-experience/) blog. ----### New release of Azure AD Connect - July 2018 --**Type:** Changed feature -**Service category:** App Provisioning -**Product capability:** Identity Lifecycle Management --The latest release of Azure AD Connect includes: --- Bug fixes and supportability updates--- General Availability of the Ping-Federate integration--- Updates to the latest SQL 2012 client--For more information about this update, see [Azure AD Connect: Version release history](../hybrid/reference-connect-version-history.md) ----### Updates to the terms of use end-user UI --**Type:** Changed feature -**Service category:** Terms of use -**Product capability:** Governance --We're updating the acceptance string in the TOU end-user UI. --**Current text.** In order to access [tenantName] resources, you must accept the terms of use.<br>**New text.** In order to access [tenantName] resource, you must read the terms of use. --**Current text:** Choosing to accept means that you agree to all of the above terms of use.<br>**New text:** Please select Accept to confirm that you have read and understood the terms of use. ----### Pass-through Authentication supports legacy protocols and applications --**Type:** Changed feature -**Service category:** Authentications (Logins) -**Product capability:** User Authentication --Pass-through Authentication now supports legacy protocols and apps. 
The following limitations are now fully supported: --- User sign-ins to legacy Office client applications, Office 2010 and Office 2013, without requiring modern authentication.--- Access to calendar sharing and free/busy information in Exchange hybrid environments on Office 2010 only.--- User sign-ins to Skype for Business client applications without requiring modern authentication.--- User sign-ins to PowerShell version 1.0.--- The Apple Device Enrollment Program (Apple DEP), using the iOS Setup Assistant.----### Converged security info management for self-service password reset and MultiFactor Authentication --**Type:** New feature -**Service category:** SSPR -**Product capability:** User Authentication --This new feature lets users manage their security info (for example, phone number, email address, mobile app, and so on) for self-service password reset (SSPR) and multifactor authentication (MFA) in a single experience. Users will no longer have to register the same security info for SSPR and multifactor authentication (MFA) in two different experiences. This new experience also applies to users who have either SSPR or multifactor authentication (MFA). --If an organization isn't enforcing multifactor authentication (MFA) or SSPR registration, users can register their security info through the **My Apps** portal. From there, users can register any methods enabled for multifactor authentication (MFA) or SSPR. --This is an opt-in public preview. Admins can turn on the new experience (if desired) for a selected group of users or all users in a tenant. ----### Use the Microsoft Authenticator app to verify your identity when you reset your password --**Type:** Changed feature -**Service category:** SSPR -**Product capability:** User Authentication --This feature lets non-admins verify their identity while resetting a password using a notification or code from Microsoft Authenticator (or any other authenticator app). After admins turn on this self-service password reset method, users who have registered a mobile app through aka.ms/mfasetup or aka.ms/setupsecurityinfo can use their mobile app as a verification method while resetting their password. --Mobile app notification can only be turned on as part of a policy that requires two methods to reset your password. ----## June 2018 --### Change notice: Security fix to the delegated authorization flow for apps using Azure AD Activity Logs API --**Type:** Plan for change -**Service category:** Reporting -**Product capability:** Monitoring & Reporting --Due to our stronger security enforcement, we've had to make a change to the permissions for apps that use a delegated authorization flow to access [Azure AD Activity Logs APIs](../reports-monitoring/concept-reporting-api.md). This change will occur by **June 26, 2018**. --If any of your apps use Azure AD Activity Log APIs, follow these steps to ensure the app doesn't break after the change happens. --**To update your app permissions** --1. Sign in to the Azure portal, select **Azure Active Directory**, and then select **App Registrations**. -2. Select your app that uses the Azure AD Activity Logs API, select **Settings**, select **Required permissions**, and then select the **Windows Azure Active Directory** API. -3. In the **Delegated permissions** area of the **Enable access** blade, select the box next to **Read directory** data, and then select **Save**. -4. Select **Grant permissions**, and then select **Yes**. -- >[!Note] - >You must be a Global administrator to grant permissions to the app. 
--For more information, see the [Grant permissions](../reports-monitoring/howto-configure-prerequisites-for-reporting-api.md#grant-permissions) area of the Prerequisites to access the Azure AD reporting API article. ----### Configure TLS settings to connect to Azure AD services for PCI DSS compliance --**Type:** New feature -**Service category:** N/A -**Product capability:** Platform --Transport Layer Security (TLS) is a protocol that provides privacy and data integrity between two communicating applications and is the most widely deployed security protocol used today. --The [PCI Security Standards Council](https://www.pcisecuritystandards.org/) has determined that early versions of TLS and Secure Sockets Layer (SSL) must be disabled in favor of enabling new and more secure app protocols, with compliance starting on **June 30, 2018**. This change means that if you connect to Azure AD services and require PCI DSS compliance, you must disable TLS 1.0. Multiple versions of TLS are available, but TLS 1.2 is the latest version available for Azure Active Directory Services. We highly recommend moving directly to TLS 1.2 for both client/server and browser/server combinations. --Out-of-date browsers might not support newer TLS versions, such as TLS 1.2. To see which versions of TLS are supported by your browser, go to the [Qualys SSL Labs](https://www.ssllabs.com/) site and select **Test your browser**. We recommend you upgrade to the latest version of your web browser and preferably enable only TLS 1.2. --**To enable TLS 1.2 in each browser** --- **Microsoft Edge and Internet Explorer (both are set using Internet Explorer)**-- 1. Open Internet Explorer, select **Tools** > **Internet Options** > **Advanced**. - 2. In the **Security** area, select **use TLS 1.2**, and then select **OK**. - 3. Close all browser windows and restart Internet Explorer. --- **Google Chrome**-- 1. Open Google Chrome, type *chrome://settings/* into the address bar, and press **Enter**. - 2. Expand the **Advanced** options, go to the **System** area, and select **Open proxy settings**. - 3. In the **Internet Properties** box, select the **Advanced** tab, go to the **Security** area, select **use TLS 1.2**, and then select **OK**. - 4. Close all browser windows and restart Google Chrome. --- **Mozilla Firefox**-- 1. Open Firefox, type *about:config* into the address bar, and then press **Enter**. - 2. Search for the term *TLS*, and then select the **security.tls.version.max** entry. - 3. Set the value to **3** to force the browser to use up to TLS version 1.2, and then select **OK**. -- >[!NOTE] - >Firefox version 60.0 supports TLS 1.3, so you can also set the security.tls.version.max value to **4**. -- 4. Close all browser windows and restart Mozilla Firefox.
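--Non-browser clients that call Azure AD endpoints should also refuse TLS versions earlier than 1.2. The following is a minimal Node.js/TypeScript sketch of one way to do this, assuming Node.js 11.4 or later (where the `minVersion` TLS option is available); the request target shown is only an example.

```typescript
import * as https from "https";

// Minimal sketch: require TLS 1.2 or later for an HTTPS call to an Azure AD
// endpoint. Node.js rejects TLS 1.0/1.1 handshakes when minVersion is set.
const req = https.request(
  {
    host: "login.microsoftonline.com",
    path: "/common/.well-known/openid-configuration", // example request only
    method: "GET",
    minVersion: "TLSv1.2",
  },
  (res) => {
    console.log("Status:", res.statusCode);
    res.resume(); // drain the response body
  }
);
req.on("error", (err) => console.error("TLS or network error:", err));
req.end();
```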
----### New Federated Apps available in Azure AD app gallery - June 2018 --**Type:** New feature -**Service category:** Enterprise Apps -**Product capability:** 3rd Party Integration --In June 2018, we've added these 15 new apps with Federation support to the app gallery: --[Skytap](../saas-apps/skytap-tutorial.md), [Settling music](../saas-apps/settlingmusic-tutorial.md), [SAML 1.1 Token enabled LOB App](../saas-apps/saml-tutorial.md), [Supermood](../saas-apps/supermood-tutorial.md), [Autotask](../saas-apps/autotaskendpointbackup-tutorial.md), [Endpoint Backup](../saas-apps/autotaskendpointbackup-tutorial.md), [Skyhigh Networks](../saas-apps/skyhighnetworks-tutorial.md), Smartway2, [TonicDM](../saas-apps/tonicdm-tutorial.md), [Moconavi](../saas-apps/moconavi-tutorial.md), [Zoho One](../saas-apps/zohoone-tutorial.md), [SharePoint on-premises](../saas-apps/sharepoint-on-premises-tutorial.md), [ForeSee CX Suite](../saas-apps/foreseecxsuite-tutorial.md), [Vidyard](../saas-apps/vidyard-tutorial.md), [ChronicX](../saas-apps/chronicx-tutorial.md) --For more information about the apps, see [SaaS application integration with Azure Active Directory](../saas-apps/tutorial-list.md). For more information about listing your application in the Azure AD app gallery, see [List your application in the Azure Active Directory application gallery](../manage-apps/v2-howto-app-gallery-listing.md). ----### Azure AD Password Protection is available in public preview --**Type:** New feature -**Service category:** Identity Protection -**Product capability:** User Authentication --Use Azure AD Password Protection to help eliminate easily guessed passwords from your environment. Eliminating these passwords helps to lower the risk of compromise from a password spray type of attack. --Specifically, Azure AD Password Protection helps you: --- Protect your organization's accounts in both Azure AD and Windows Server Active Directory (AD).-- Stop your users from using passwords on a list of more than 500 of the most commonly used passwords, and over 1 million character substitution variations of those passwords.-- Administer Azure AD Password Protection from a single location in the Azure portal, for both Azure AD and on-premises Windows Server AD.--For more information about Azure AD Password Protection, see [Eliminate bad passwords in your organization](../authentication/concept-password-ban-bad.md). ----### New "all guests" Conditional Access policy template created during terms of use creation --**Type:** New feature -**Service category:** Terms of use -**Product capability:** Governance --During the creation of your terms of use, a new Conditional Access policy template is also created for "all guests" and "all apps". This new policy template applies the newly created ToU, streamlining the creation and enforcement process for guests. --For more information, see [Azure Active Directory Terms of use feature](../conditional-access/terms-of-use.md). ----### New "custom" Conditional Access policy template created during terms of use creation --**Type:** New feature -**Service category:** Terms of use -**Product capability:** Governance --During the creation of your terms of use, a new "custom" Conditional Access policy template is also created. This new policy template lets you create the ToU and then immediately go to the Conditional Access policy creation blade, without needing to manually navigate through the portal.
--For more information, see [Azure Active Directory Terms of use feature](../conditional-access/terms-of-use.md). ----### New and comprehensive guidance about deploying Azure AD Multi-Factor Authentication --**Type:** New feature -**Service category:** Other -**Product capability:** Identity Security & Protection --We've released new step-by-step guidance about how to deploy Azure AD Multi-Factor Authentication (MFA) in your organization. --To view the Azure AD Multi-Factor Authentication (MFA) deployment guide, go to the [Identity Deployment Guides](./active-directory-deployment-plans.md) repo on GitHub. To provide feedback about the deployment guides, use the [Deployment Plan Feedback form](https://aka.ms/deploymentplanfeedback). If you have any questions about the deployment guides, contact us at [IDGitDeploy](mailto:idgitdeploy@microsoft.com). ----### Azure AD delegated app management roles are in public preview --**Type:** New feature -**Service category:** Enterprise Apps -**Product capability:** Access Control --Admins can now delegate app management tasks without assigning the Global Administrator role. The new roles and capabilities are: --- **New standard Azure AD admin roles:**-- - **Application Administrator.** Grants the ability to manage all aspects of all apps, including registration, SSO settings, app assignments and licensing, App proxy settings, and consent (except to Azure AD resources). -- - **Cloud Application Administrator.** Grants all of the Application Administrator abilities, except for App proxy because it doesn't provide on-premises access. -- - **Application Developer.** Grants the ability to create app registrations, even if the **allow users to register apps** option is turned off. --- **Ownership** (set up per-app registration and per-enterprise app, similar to the group ownership process):-- - **App Registration Owner.** Grants the ability to manage all aspects of owned app registration, including the app manifest and adding additional owners. -- - **Enterprise App Owner.** Grants the ability to manage many aspects of owned enterprise apps, including SSO settings, app assignments, and consent (except to Azure AD resources). --For more information about public preview, see the [Azure AD delegated application management roles are in public preview!](https://cloudblogs.microsoft.com/enterprisemobility/2018/06/13/hallelujah-azure-ad-delegated-application-management-roles-are-in-public-preview/) blog. For more information about roles and permissions, see [Assigning administrator roles in Azure Active Directory](../roles/permissions-reference.md). ----## May 2018 --### ExpressRoute support changes --**Type:** Plan for change -**Service category:** Authentications (Logins) -**Product capability:** Platform --Software as a service offerings like Azure Active Directory (Azure AD) are designed to work best by going directly through the Internet, without requiring ExpressRoute or any other private VPN tunnels. Because of this, on **August 1, 2018**, we'll stop supporting ExpressRoute for Azure AD services using Azure public peering and Azure communities in Microsoft peering. Any services impacted by this change might notice Azure AD traffic gradually shifting from ExpressRoute to the Internet. --While we're changing our support, we also know there are still situations where you might need to use a dedicated set of circuits for your authentication traffic.
Because of this, Azure AD will continue to support per-tenant IP range restrictions using ExpressRoute and services already on Microsoft peering with the "Other Office 365 Online services" community. If your services are impacted, but you require ExpressRoute, you must do the following: --- **If you're on Azure public peering.** Move to Microsoft peering and sign up for the **Other Office 365 Online services (12076:5100)** community. For more info about how to move from Azure public peering to Microsoft peering, see the [Move a public peering to Microsoft peering](../../expressroute/how-to-move-peering.md) article.--- **If you're on Microsoft peering.** Sign up for the **Other Office 365 Online service (12076:5100)** community. For more info about routing requirements, see the [Support for BGP communities section](../../expressroute/expressroute-routing.md#bgp) of the ExpressRoute routing requirements article.--If you must continue to use dedicated circuits, you'll need to talk to your Microsoft Account team about how to get authorization to use the **Other Office 365 Online service (12076:5100)** community. The MS Office-managed review board will verify whether you need those circuits and make sure you understand the technical implications of keeping them. Unauthorized subscriptions trying to create route filters for Office 365 will receive an error message. ----### Microsoft Graph APIs for administrative scenarios for TOU --**Type:** New feature -**Service category:** Terms of use -**Product capability:** Developer Experience --We've added Microsoft Graph APIs for administrative operations on Azure AD terms of use. You can create, update, and delete terms of use objects. ----### Add Azure AD multi-tenant endpoint as an identity provider in Azure AD B2C --**Type:** New feature -**Service category:** B2C - Consumer Identity Management -**Product capability:** B2B/B2C --Using custom policies, you can now add the Azure AD common endpoint as an identity provider in Azure AD B2C. This allows you to have a single point of entry for all Azure AD users that are signing into your applications. For more information, see [Azure Active Directory B2C: Allow users to sign in to a multi-tenant Azure AD identity provider using custom policies](../../active-directory-b2c/identity-provider-azure-ad-multi-tenant.md). ----### Use Internal URLs to access apps from anywhere with our My Apps Sign-in Extension and the Azure AD Application Proxy --**Type:** New feature -**Service category:** My Apps -**Product capability:** SSO --Users can now access applications through internal URLs even when outside your corporate network by using the My Apps Secure Sign-in Extension for Azure AD. This will work with any application that you have published using Azure AD Application Proxy, on any browser that also has the Access Panel browser extension installed. The URL redirection functionality is automatically enabled once a user logs into the extension. The extension is available for download on [Microsoft Edge](https://go.microsoft.com/fwlink/?linkid=845176) and [Chrome](https://go.microsoft.com/fwlink/?linkid=866367). ----### Azure Active Directory - Data in Europe for Europe customers --**Type:** New feature -**Service category:** Other -**Product capability:** GoLocal --Customers in Europe require their data to stay in Europe and not be replicated outside European datacenters, to meet privacy requirements and European laws.
This [article](./active-directory-data-storage-eu.md) provides the specific details on what identity information will be stored within Europe and also provides details on information that will be stored outside European datacenters. ----### New user provisioning SaaS app integrations - May 2018 --**Type:** New feature -**Service category:** App Provisioning -**Product capability:** 3rd Party Integration --Azure AD allows you to automate the creation, maintenance, and removal of user identities in SaaS applications such as Dropbox, Salesforce, ServiceNow, and more. For May 2018, we have added user provisioning support for the following applications in the Azure AD app gallery: --- [BlueJeans](../saas-apps/bluejeans-provisioning-tutorial.md)--- [Cornerstone OnDemand](../saas-apps/cornerstone-ondemand-provisioning-tutorial.md)--- [Zendesk](../saas-apps/zendesk-provisioning-tutorial.md)--For a list of all applications that support user provisioning in the Azure AD gallery, see [https://aka.ms/appstutorial](../saas-apps/tutorial-list.md). ----### Azure AD access reviews of groups and app access now provide recurring reviews --**Type:** New feature -**Service category:** Access Reviews -**Product capability:** Governance --Access reviews of groups and apps are now generally available as part of Azure AD Premium P2. Administrators will be able to configure access reviews of group memberships and application assignments to automatically recur at regular intervals, such as monthly or quarterly. ----### Azure AD Activity logs (sign-ins and audit) are now available through MS Graph --**Type:** New feature -**Service category:** Reporting -**Product capability:** Monitoring & Reporting --Azure AD Activity logs, which include sign-in and audit logs, are now available through the Microsoft Graph API. We have exposed two endpoints through the Microsoft Graph API to access these logs. Check out our [documentation](../reports-monitoring/concept-reporting-api.md) for programmatic access to Azure AD Reporting APIs to get started. ----### Improvements to the B2B redemption experience and leave an org --**Type:** New feature -**Service category:** B2B -**Product capability:** B2B/B2C --**Just in time redemption:** Once you share a resource with a guest user using the B2B API, you don't need to send out a special invitation email. In most cases, the guest user can access the resource and will be taken through the redemption experience just in time. No more impact due to missed emails. No more asking your guest users "Did you click on that redemption link the system sent you?". This means that once SharePoint Online (SPO) uses the invitation manager, cloud attachments can have the same canonical URL for all users, internal and external, in any state of redemption. --**Modern redemption experience:** No more split screen redemption landing page. Users will see a modern consent experience with the inviting organization's privacy statement, just like they do for third-party apps. --**Guest users can leave the org:** Once a user's relationship with an org is over, they can self-serve leaving the organization. No more calling the inviting org's admin to "be removed", no more raising support tickets.
----### New Federated Apps available in Azure AD app gallery - May 2018 --**Type:** New feature -**Service category:** Enterprise Apps -**Product capability:** 3rd Party Integration --In May 2018, we've added these 18 new apps with Federation support to our app gallery: --[AwardSpring](../saas-apps/awardspring-tutorial.md), Infogix Data3Sixty Govern, [Yodeck](../saas-apps/infogix-tutorial.md), [Jamf Pro](../saas-apps/jamfprosamlconnector-tutorial.md), [KnowledgeOwl](../saas-apps/knowledgeowl-tutorial.md), [Envi MMIS](../saas-apps/envimmis-tutorial.md), [LaunchDarkly](../saas-apps/launchdarkly-tutorial.md), [Adobe Captivate Prime](../saas-apps/adobecaptivateprime-tutorial.md), [Montage Online](../saas-apps/montageonline-tutorial.md), [まなびポケット](../saas-apps/manabipocket-tutorial.md), OpenReel, [Arc Publishing - SSO](../saas-apps/arc-tutorial.md), [PlanGrid](../saas-apps/plangrid-tutorial.md), [iWellnessNow](../saas-apps/iwellnessnow-tutorial.md), [Proxyclick](../saas-apps/proxyclick-tutorial.md), [Riskware](../saas-apps/riskware-tutorial.md), [Flock](../saas-apps/flock-tutorial.md), [Reviewsnap](../saas-apps/reviewsnap-tutorial.md) --For more information about the apps, see [SaaS application integration with Azure Active Directory](../saas-apps/tutorial-list.md). --For more information about listing your application in the Azure AD app gallery, see [List your application in the Azure Active Directory application gallery](../manage-apps/v2-howto-app-gallery-listing.md). ----### New step-by-step deployment guides for Azure Active Directory --**Type:** New feature -**Service category:** Other -**Product capability:** Directory --We've released new, step-by-step guidance about how to deploy Azure Active Directory (Azure AD), including self-service password reset (SSPR), single sign-on (SSO), Conditional Access, App proxy, User provisioning, Active Directory Federation Services (ADFS) to Pass-through Authentication (PTA), and ADFS to Password hash sync (PHS). --To view the deployment guides, go to the [Identity Deployment Guides](./active-directory-deployment-plans.md) repo on GitHub. To provide feedback about the deployment guides, use the [Deployment Plan Feedback form](https://aka.ms/deploymentplanfeedback). If you have any questions about the deployment guides, contact us at [IDGitDeploy](mailto:idgitdeploy@microsoft.com). ----### Enterprise Applications Search - Load More Apps --**Type:** New feature -**Service category:** Enterprise Apps -**Product capability:** SSO --Having trouble finding your applications / service principals? We've added the ability to load more applications in your enterprise applications **All applications** list. By default, we show 20 applications. You can now click **Load more** to view additional applications. ----### The May release of AADConnect contains a public preview of the integration with PingFederate, important security updates, many bug fixes, and great new troubleshooting tools. --**Type:** Changed feature -**Service category:** AD Connect -**Product capability:** Identity Lifecycle Management --The May release of AADConnect contains a public preview of the integration with PingFederate, important security updates, many bug fixes, and great new troubleshooting tools. You can find the release notes [here](../hybrid/reference-connect-version-history.md).
----### Azure AD access reviews: auto-apply --**Type:** Changed feature -**Service category:** Access Reviews -**Product capability:** Governance --Access reviews of groups and apps are now generally available as part of Azure AD Premium P2. An administrator can configure the review to automatically apply the reviewer's changes to that group or app as the access review completes. The administrator can also specify what happens to a user's continued access if reviewers don't respond: remove access, keep access, or take system recommendations. ----### ID tokens can no longer be returned using the query response_mode for new apps. --**Type:** Changed feature -**Service category:** Authentications (Logins) -**Product capability:** User Authentication --Apps created on or after April 25, 2018 will no longer be able to request an **id_token** using the **query** response_mode. This brings Azure AD in line with the OIDC specifications and helps reduce your app's attack surface. Apps created before April 25, 2018 are not blocked from using the **query** response_mode with a response_type of **id_token**. The error returned, when requesting an id_token from Azure AD, is **AADSTS70007: 'query' is not a supported value of 'response_mode' when requesting a token**. --The **fragment** and **form_post** response_modes continue to work. When creating new application objects (for example, for App Proxy usage), ensure that one of these response_modes is used before creating a new application. ----## April 2018 --### Azure AD B2C access tokens are GA --**Type:** New feature -**Service category:** B2C - Consumer Identity Management -**Product capability:** B2B/B2C --You can now access Web APIs secured by Azure AD B2C using access tokens. The feature is moving from public preview to GA. The UI experience to configure Azure AD B2C applications and web APIs has been improved, and other minor improvements were made. --For more information, see [Azure AD B2C: Requesting access tokens](../../active-directory-b2c/access-tokens.md). ----### Test single sign-on configuration for SAML-based applications --**Type:** New feature -**Service category:** Enterprise Apps -**Product capability:** SSO --When configuring SAML-based SSO applications, you're able to test the integration on the configuration page. If you encounter an error during sign-in, you can provide the error in the testing experience and Azure AD provides you with resolution steps to solve the specific issue. --For more information, see: --- [Configuring single sign-on to applications that are not in the Azure Active Directory application gallery](../manage-apps/view-applications-portal.md)-- [How to debug SAML-based single sign-on to applications in Azure Active Directory](../manage-apps/debug-saml-sso-issues.md)----### Azure AD terms of use now has per-user reporting --**Type:** New feature -**Service category:** Terms of use -**Product capability:** Compliance --Administrators can now select a given ToU and see all the users that have consented to that ToU and what date/time it took place. --For more information, see the [Azure AD terms of use feature](../conditional-access/terms-of-use.md). ----### Azure AD Connect Health: Risky IP for AD FS extranet lockout protection --**Type:** New feature -**Service category:** Other -**Product capability:** Monitoring & Reporting --Connect Health now supports the ability to detect IP addresses that exceed a threshold of failed username/password (U/P) logins on an hourly or daily basis.
The capabilities provided by this feature are: --- Comprehensive report showing IP address and the number of failed logins generated on an hourly/daily basis with a customizable threshold.-- Email-based alerts showing when a specific IP address has exceeded the threshold of failed U/P logins on an hourly/daily basis.-- A download option to do a detailed analysis of the data.--For more information, see [Risky IP Report](../hybrid/how-to-connect-health-adfs.md). ----### Easy app config with metadata file or URL --**Type:** New feature -**Service category:** Enterprise Apps -**Product capability:** SSO --On the Enterprise applications page, administrators can upload a SAML metadata file to configure SAML-based sign-on for Azure AD Gallery and Non-Gallery applications. --Additionally, you can use the Azure AD application federation metadata URL to configure SSO with the targeted application. --For more information, see [Configuring single sign-on to applications that are not in the Azure Active Directory application gallery](../manage-apps/view-applications-portal.md). ----### Azure AD Terms of use now generally available --**Type:** New feature -**Service category:** Terms of use -**Product capability:** Compliance ---Azure AD terms of use have moved from public preview to general availability. --For more information, see the [Azure AD terms of use feature](../conditional-access/terms-of-use.md). ----### Allow or block invitations to B2B users from specific organizations --**Type:** New feature -**Service category:** B2B -**Product capability:** B2B/B2C ---You can now specify which partner organizations you want to share and collaborate with in Azure AD B2B Collaboration. To do this, you can create a list of specific allowed or denied domains. When a domain is blocked using these capabilities, employees can no longer send invitations to people in that domain. --This helps you to control access to your resources, while enabling a smooth experience for approved users. --This B2B Collaboration feature is available for all Azure Active Directory customers and can be used in conjunction with Azure AD Premium features like Conditional Access and identity protection for more granular control of when and how external business users sign in and gain access. --For more information, see [Allow or block invitations to B2B users from specific organizations](../external-identities/allow-deny-list.md). ----### New federated apps available in Azure AD app gallery --**Type:** New feature -**Service category:** Enterprise Apps -**Product capability:** 3rd Party Integration --In April 2018, we've added these 13 new apps with Federation support to our app gallery: --Criterion HCM, [FiscalNote](../saas-apps/fiscalnote-tutorial.md), [Secret Server (On-Premises)](../saas-apps/secretserver-on-premises-tutorial.md), [Dynamic Signal](../saas-apps/dynamicsignal-tutorial.md), [mindWireless](../saas-apps/mindwireless-tutorial.md), [OrgChart Now](../saas-apps/orgchartnow-tutorial.md), [Ziflow](../saas-apps/ziflow-tutorial.md), [AppNeta Performance Monitor](../saas-apps/appneta-tutorial.md), [Elium](../saas-apps/elium-tutorial.md), [Fluxx Labs](../saas-apps/fluxxlabs-tutorial.md), [Cisco Cloud](../saas-apps/ciscocloud-tutorial.md), Shelf, [SafetyNet](../saas-apps/safetynet-tutorial.md) --For more information about the apps, see [SaaS application integration with Azure Active Directory](../saas-apps/tutorial-list.md).
--For more information about listing your application in the Azure AD app gallery, see [List your application in the Azure Active Directory application gallery](../manage-apps/v2-howto-app-gallery-listing.md). ----### Grant B2B users in Azure AD access to your on-premises applications (public preview) --**Type:** New feature -**Service category:** B2B -**Product capability:** B2B/B2C --As an organization that uses Azure Active Directory (Azure AD) B2B collaboration capabilities to invite guest users from partner organizations to your Azure AD, you can now provide these B2B users access to on-premises apps. These on-premises apps can use SAML-based authentication or integrated Windows authentication (IWA) with Kerberos constrained delegation (KCD). --For more information, see [Grant B2B users in Azure AD access to your on-premises applications](../external-identities/hybrid-cloud-to-on-premises.md). ----### Get SSO integration tutorials from the Azure Marketplace --**Type:** Changed feature -**Service category:** Other -**Product capability:** 3rd Party Integration --If an application that is listed in the [Azure Marketplace](https://azuremarketplace.microsoft.com/marketplace/apps/category/azure-active-directory-apps?page=1) supports SAML-based single sign-on, clicking **Get it now** provides you with the integration tutorial associated with that application. ----### Faster performance of Azure AD automatic user provisioning to SaaS applications --**Type:** Changed feature -**Service category:** App Provisioning -**Product capability:** 3rd Party Integration --Previously, customers using the Azure Active Directory user provisioning connectors for SaaS applications (for example, Salesforce, ServiceNow, and Box) could experience slow performance if their Azure AD tenants contained over 100,000 combined users and groups, and they were using user and group assignments to determine which users should be provisioned. --On April 2, 2018, significant performance enhancements were deployed to the Azure AD provisioning service that greatly reduce the amount of time needed to perform initial synchronizations between Azure Active Directory and target SaaS applications. --As a result, many customers that had initial synchronizations to apps that took many days or never completed, are now completing within a matter of minutes or hours. --For more information, see [What happens during provisioning?](../../active-directory/app-provisioning/how-provisioning-works.md) ----### Self-service password reset from Windows 10 lock screen for hybrid Azure AD joined machines --**Type:** Changed feature -**Service category:** Self Service Password Reset -**Product capability:** User Authentication --We have updated the Windows 10 SSPR feature to include support for machines that are hybrid Azure AD joined. This feature, available in Windows 10 RS4, allows users to reset their password from the lock screen of a Windows 10 machine. Users who are enabled and registered for self-service password reset can utilize this feature. --For more information, see [Azure AD password reset from the login screen](../authentication/howto-sspr-windows.md). ----## March 2018 --### Certificate expiration notification --**Type:** Fixed -**Service category:** Enterprise Apps -**Product capability:** SSO --Azure AD sends a notification when a certificate for a gallery or non-gallery application is about to expire. --Some users did not receive notifications for enterprise applications configured for SAML-based single sign-on.
This issue was resolved. Azure AD sends notifications for certificates expiring in 7, 30, and 60 days. You are able to see this event in the audit logs. --For more information, see: --- [Manage Certificates for federated single sign-on in Azure Active Directory](../manage-apps/manage-certificates-for-federated-single-sign-on.md)-- [Audit activity reports in the Azure portal](../reports-monitoring/concept-audit-logs.md)----### Twitter and GitHub identity providers in Azure AD B2C --**Type:** New feature -**Service category:** B2C - Consumer Identity Management -**Product capability:** B2B/B2C --You can now add Twitter or GitHub as an identity provider in Azure AD B2C. Twitter is moving from public preview to GA. GitHub is being released in public preview. --For more information, see [What is Azure AD B2B collaboration?](../external-identities/what-is-b2b.md). ----### Restrict browser access using Intune Managed Browser with Azure AD application-based Conditional Access for iOS and Android --**Type:** New feature -**Service category:** Conditional Access -**Product capability:** Identity Security & Protection --**Now in public preview!** --**Intune Managed Browser SSO:** Your employees can use single sign-on across native clients (like Microsoft Outlook) and the Intune Managed Browser for all Azure AD-connected apps. --**Intune Managed Browser Conditional Access Support:** You can now require employees to use the Intune Managed Browser using application-based Conditional Access policies. --Read more about this in our [blog post](https://cloudblogs.microsoft.com/enterprisemobility/2018/03/15/the-intune-managed-browser-now-supports-azure-ad-sso-and-conditional-access/). --For more information, see: --- [Setup application-based Conditional Access](../conditional-access/app-based-conditional-access.md)--- [Configure managed browser policies](/mem/intune/apps/manage-microsoft-edge)----### App Proxy Cmdlets in PowerShell GA Module --**Type:** New feature -**Service category:** App Proxy -**Product capability:** Access Control --Support for Application Proxy cmdlets is now in the PowerShell GA Module! This does require you to stay updated on PowerShell modules - if you become more than a year behind, some cmdlets may stop working. --For more information, see [AzureAD](/powershell/module/Azuread/). ----### Office 365 native clients are supported by Seamless SSO using a non-interactive protocol --**Type:** New feature -**Service category:** Authentications (Logins) -**Product capability:** User Authentication --Users using Office 365 native clients (version 16.0.8730.xxxx and above) get a silent sign-on experience using Seamless SSO. This support is provided by the addition of a non-interactive protocol (WS-Trust) to Azure AD.
--For more information, see [How does sign-in on a native client with Seamless SSO work?](../hybrid/how-to-connect-sso-how-it-works.md#how-does-sign-in-on-a-native-client-with-seamless-sso-work) ----### Users get a silent sign-on experience, with Seamless SSO, if an application sends sign-in requests to Azure AD's tenant endpoints --**Type:** New feature -**Service category:** Authentications (Logins) -**Product capability:** User Authentication --Users get a silent sign-on experience, with Seamless SSO, if an application (for example, `https://contoso.sharepoint.com`) sends sign-in requests to Azure AD's tenant endpoints - that is, `https://login.microsoftonline.com/contoso.com/<..>` or `https://login.microsoftonline.com/<tenant_ID>/<..>` - instead of Azure AD's common endpoint (`https://login.microsoftonline.com/common/<...>`). --For more information, see [Azure Active Directory Seamless Single Sign-On](../hybrid/how-to-connect-sso.md). ----### Need to add only one Azure AD URL, instead of two URLs previously, to users' Intranet zone settings to roll out Seamless SSO --**Type:** New feature -**Service category:** Authentications (Logins) -**Product capability:** User Authentication --To roll out Seamless SSO to your users, you need to add only one Azure AD URL to the users' Intranet zone settings by using group policy in Active Directory: `https://autologon.microsoftazuread-sso.com`. Previously, customers were required to add two URLs. --For more information, see [Azure Active Directory Seamless Single Sign-On](../hybrid/how-to-connect-sso.md). ----### New Federated Apps available in Azure AD app gallery --**Type:** New feature -**Service category:** Enterprise Apps -**Product capability:** 3rd Party Integration --In March 2018, we've added these 15 new apps with Federation support to our app gallery: --[Boxcryptor](../saas-apps/boxcryptor-tutorial.md), [CylancePROTECT](../saas-apps/cylanceprotect-tutorial.md), Wrike, [SignalFx](../saas-apps/signalfx-tutorial.md), Assistant by FirstAgenda, [YardiOne](../saas-apps/yardione-tutorial.md), Vtiger CRM, inwink, [Amplitude](../saas-apps/amplitude-tutorial.md), [Spacio](../saas-apps/spacio-tutorial.md), [ContractWorks](../saas-apps/contractworks-tutorial.md), [Bersin](../saas-apps/bersin-tutorial.md), [Mercell](../saas-apps/mercell-tutorial.md), [Trisotech Digital Enterprise Server](../saas-apps/trisotechdigitalenterpriseserver-tutorial.md), [Qumu Cloud](../saas-apps/qumucloud-tutorial.md). --For more information about the apps, see [SaaS application integration with Azure Active Directory](../saas-apps/tutorial-list.md). --For more information about listing your application in the Azure AD app gallery, see [List your application in the Azure Active Directory application gallery](../manage-apps/v2-howto-app-gallery-listing.md). ----### PIM for Azure Resources is generally available --**Type:** New feature -**Service category:** Privileged Identity Management -**Product capability:** Privileged Identity Management --If you are using Azure AD Privileged Identity Management for directory roles, you can now use PIM's time-bound access and assignment capabilities for Azure Resource roles such as Subscriptions, Resource Groups, Virtual Machines, and any other resource supported by Azure Resource Manager. Enforce multifactor authentication when activating roles Just-In-Time, and schedule activations in coordination with approved change windows. 
In addition, this release adds enhancements not available during public preview, including an updated UI, approval workflows, and the ability to extend roles expiring soon and renew expired roles. --For more information, see [PIM for Azure resources (Preview)](../privileged-identity-management/azure-pim-resource-rbac.md). ----### Adding Optional Claims to your app's tokens (public preview) --**Type:** New feature -**Service category:** Authentications (Logins) -**Product capability:** User Authentication --Your Azure AD app can now request custom or optional claims in JWTs or SAML tokens. These are claims about the user or tenant that are not included by default in the token, due to size or applicability constraints. This is currently in public preview for Azure AD apps on the v1.0 and v2.0 endpoints. See the documentation for information on what claims can be added and how to edit your application manifest to request them. --For more information, see [Optional claims in Azure AD](../develop/active-directory-optional-claims.md). ----### Azure AD supports PKCE for more secure OAuth flows --**Type:** New feature -**Service category:** Authentications (Logins) -**Product capability:** User Authentication --Azure AD docs have been updated to note support for PKCE, which allows for more secure communication during the OAuth 2.0 Authorization Code grant flow. Both S256 and plaintext code_challenges are supported on the v1.0 and v2.0 endpoints. --For more information, see [Request an authorization code](../develop/v2-oauth2-auth-code-flow.md#request-an-authorization-code). ----### Support for provisioning all user attribute values available in the Workday Get_Workers API --**Type:** New feature -**Service category:** App Provisioning -**Product capability:** 3rd Party Integration --The public preview of inbound provisioning from Workday to Active Directory and Azure AD now supports the ability to extract and provision all attribute values available in the Workday Get_Workers API. This adds support for hundreds of additional standard and custom attributes beyond the ones shipped with the initial version of the Workday inbound provisioning connector. --For more information, see [Customizing the list of Workday user attributes](../saas-apps/workday-inbound-tutorial.md#customizing-the-list-of-workday-user-attributes). ----### Changing group membership from dynamic to static, and vice versa --**Type:** New feature -**Service category:** Group Management -**Product capability:** Collaboration --It is possible to change how membership is managed in a group. This is useful when you want to keep the same group name and ID in the system, so any existing references to the group are still valid; creating a new group would require updating those references. -We've updated the Azure portal to support this functionality. Now, customers can convert existing groups from dynamic membership to assigned membership and vice-versa. The existing PowerShell cmdlets are also still available, and a scripted alternative is sketched below.
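--If you'd rather script the conversion than use the portal, the general shape is to update the group's `groupTypes` and pause dynamic rule processing. The following TypeScript sketch is a hedged illustration of a Microsoft Graph PATCH rather than the PowerShell cmdlet path; the property values shown, and the assumption of a token holding `Group.ReadWrite.All`, should be verified against the Microsoft Graph group reference before use.

```typescript
// Hedged sketch: convert a dynamic Microsoft 365 group to assigned membership
// by dropping "DynamicMembership" from groupTypes and pausing rule processing.
// Assumes Node.js 18+ (global fetch) and a token with Group.ReadWrite.All;
// for a security group, groupTypes would become an empty array instead.
async function convertToAssigned(groupId: string, accessToken: string): Promise<void> {
  const response = await fetch(`https://graph.microsoft.com/v1.0/groups/${groupId}`, {
    method: "PATCH",
    headers: {
      Authorization: `Bearer ${accessToken}`,
      "Content-Type": "application/json",
    },
    body: JSON.stringify({
      groupTypes: ["Unified"],                 // remove "DynamicMembership"
      membershipRuleProcessingState: "Paused", // stop evaluating the old rule
    }),
  });
  if (!response.ok) {
    throw new Error(`Graph update failed: ${response.status} ${await response.text()}`);
  }
}
```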
--For more information, see [Dynamic membership rules for groups in Azure Active Directory](../enterprise-users/groups-dynamic-membership.md). ----### Improved sign-out behavior with Seamless SSO --**Type:** Changed feature -**Service category:** Authentications (Logins) -**Product capability:** User Authentication --Previously, even if users explicitly signed out of an application secured by Azure AD, they would be automatically signed back in using Seamless SSO if they were trying to access an Azure AD application again within their corpnet from their domain-joined devices. With this change, sign-out is supported. This allows users to choose the same or a different Azure AD account to sign back in with, instead of being automatically signed in using Seamless SSO. --For more information, see [Azure Active Directory Seamless Single Sign-On](../hybrid/how-to-connect-sso.md). ----### Application Proxy Connector Version 1.5.402.0 Released --**Type:** Changed feature -**Service category:** App Proxy -**Product capability:** Identity Security & Protection --This connector version is gradually being rolled out through November. This new connector version includes the following changes: --- The connector now sets domain-level cookies instead of subdomain-level cookies. This ensures a smoother SSO experience and avoids redundant authentication prompts.-- Support for chunked encoding requests-- Improved connector health monitoring-- Several bug fixes and stability improvements--For more information, see [Understand Azure AD Application Proxy connectors](../app-proxy/application-proxy-connectors.md). -- |
active-directory | Create Lifecycle Workflow | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/governance/create-lifecycle-workflow.md | Title: Create a Lifecycle Workflow- Azure AD (preview) -description: This article guides a user to creating a workflow using Lifecycle Workflows + Title: Create a lifecycle workflow (preview) - Azure AD +description: This article guides you in creating a lifecycle workflow. -# Create a Lifecycle workflow (Preview) -Lifecycle Workflows allows for tasks associated with the lifecycle process to be run automatically for users as they move through their life cycle in your organization. Workflows are made up of: +# Create a lifecycle workflow (preview) -Workflows can be created and customized for common scenarios using templates, or you can build a template from scratch without using a template. Currently if you use the Azure portal, a created workflow must be based off a template. If you wish to create a workflow without using a template, you must create it using Microsoft Graph. +Lifecycle workflows (preview) allow for tasks associated with the lifecycle process to be run automatically for users as they move through their lifecycle in your organization. Workflows consist of: ++- **Tasks**: Actions taken when a workflow is triggered. +- **Execution conditions**: The who and when of a workflow. These conditions define which users (scope) this workflow should run against, and when (trigger) the workflow should run. ++You can create and customize workflows for common scenarios by using templates, or you can build a workflow from scratch without using a template. Currently, if you use the Azure portal, any workflow that you create must be based on a template. If you want to create a workflow without using a template, use Microsoft Graph. ## Prerequisites -The Lifecycle Workflows preview requires Azure AD Premium P2. For more information, see: [License requirements](what-are-lifecycle-workflows.md#license-requirements). +The preview of lifecycle workflows requires Azure Active Directory (Azure AD) Premium P2. For more information, see [License requirements](what-are-lifecycle-workflows.md#license-requirements). ++## Create a lifecycle workflow by using a template in the Azure portal -## Create a Lifecycle workflow using a template in the Azure portal +If you're using the Azure portal to create a workflow, you can customize existing templates to meet your organization's needs. These templates include one for pre-hire common scenarios. -If you are using the Azure portal to create a workflow, you can customize existing templates to meet your organization's needs. This means you can customize the pre-hire common scenario template. To create a workflow based on one of these templates using the Azure portal do the following steps: +To create a workflow based on a template: 1. Sign in to the [Azure portal](https://portal.azure.com). -1. Select **Azure Active Directory** and then select **Identity Governance**. +1. Select **Azure Active Directory** > **Identity Governance**. -1. In the left menu, select **Lifecycle Workflows (Preview)**. +1. On the left menu, select **Lifecycle Workflows (Preview)**. -1. select **Workflows (Preview)** +1. Select **Workflows (Preview)**. -1. On the workflows screen, select the workflow template that you want to use. - :::image type="content" source="media/create-lifecycle-workflow/template-list.png" alt-text="Screenshot of a list of lifecycle workflows templates." 
lightbox="media/create-lifecycle-workflow/template-list.png"::: -1. Enter a unique display name and description for the workflow and select **Next**. - :::image type="content" source="media/create-lifecycle-workflow/template-basics.png" alt-text="Screenshot of workflow template basic information."::: +1. On the **Choose a workflow** page, select the workflow template that you want to use. -1. On the **configure scope** page select the **Trigger type** and execution conditions to be used for this workflow. For more information on what can be configured, see: [Configure scope](understanding-lifecycle-workflows.md#configure-scope). + :::image type="content" source="media/create-lifecycle-workflow/template-list.png" alt-text="Screenshot of a list of lifecycle workflow templates." lightbox="media/create-lifecycle-workflow/template-list.png"::: +1. On the **Basics** tab, enter a unique display name and description for the workflow, and then select **Next**. -1. Under rules, select the **Property**, **Operator**, and give it a **value**. The following picture gives an example of a rule being set up for a sales department. For a full list of user properties supported by Lifecycle Workflows, see [Supported user properties and query parameters](/graph/api/resources/identitygovernance-rulebasedsubjectset?view=graph-rest-beta&preserve-view=true#supported-user-properties-and-query-parameters) + :::image type="content" source="media/create-lifecycle-workflow/template-basics.png" alt-text="Screenshot of basic information about a workflow template."::: - :::image type="content" source="media/create-lifecycle-workflow/template-scope.png" alt-text="Screenshot of Lifecycle Workflows template scope configuration options."::: +1. On the **Configure scope** tab, select the trigger type and execution conditions to be used for this workflow. For more information on what you can configure, see [Configure scope](understanding-lifecycle-workflows.md#configure-scope). -1. To view your rule syntax, select the **View rule syntax** button. You can copy and paste multiple user property rules on this screen. For more detailed information on which properties that can be included see: [User Properties](/graph/aad-advanced-queries?tabs=http#user-properties). When you are finished adding rules, select **Next**. - :::image type="content" source="media/create-lifecycle-workflow/template-syntax.png" alt-text="Screenshot of workflow rule syntax."::: +1. Under **Rule**, enter values for **Property**, **Operator**, and **Value**. The following screenshot gives an example of a rule being set up for a sales department. For a full list of user properties that lifecycle workflows support, see [Supported user properties and query parameters](/graph/api/resources/identitygovernance-rulebasedsubjectset?view=graph-rest-beta&preserve-view=true#supported-user-properties-and-query-parameters). -1. On the **Review tasks** page you can add a task to the template by selecting **Add task**. To enable an existing task on the list, select **enable**. You're also able to disable a task by selecting **disable**. To remove a task from the template, select **Remove** on the selected task. When you are finished with tasks for your workflow, select **Next**. 
+ :::image type="content" source="media/create-lifecycle-workflow/template-scope.png" alt-text="Screenshot of scope configuration options for a lifecycle workflow template."::: - :::image type="content" source="media/create-lifecycle-workflow/template-tasks.png" alt-text="Screenshot of adding tasks to templates."::: +1. To view your rule syntax, select the **View rule syntax** button. You can copy and paste multiple user property rules on the panel that appears. For more information on which properties you can include, see [User properties](/graph/aad-advanced-queries?tabs=http#user-properties). When you finish adding rules, select **Next**. -1. On the **Review+create** page you are able to review the workflow's settings. You can also choose whether or not to enable the schedule for the workflow. Select **Create** to create the workflow. + :::image type="content" source="media/create-lifecycle-workflow/template-syntax.png" alt-text="Screenshot of workflow rule syntax."::: ++1. On the **Review tasks** tab, you can add a task to the template by selecting **Add task**. To enable an existing task on the list, select **Enable**. To disable a task, select **Disable**. To remove a task from the template, select **Remove**. - :::image type="content" source="media/create-lifecycle-workflow/template-review.png" alt-text="Screenshot of reviewing and creating a template."::: + When you're finished with tasks for your workflow, select **Next: Review and create**. ++ :::image type="content" source="media/create-lifecycle-workflow/template-tasks.png" alt-text="Screenshot of adding tasks to templates."::: +1. On the **Review and create** tab, review the workflow's settings. You can also choose whether or not to enable the schedule for the workflow. Select **Create** to create the workflow. + :::image type="content" source="media/create-lifecycle-workflow/template-review.png" alt-text="Screenshot of reviewing and creating a workflow."::: > [!IMPORTANT]-> By default, a newly created workflow is disabled to allow for the testing of it first on smaller audiences. For more information about testing workflows before rolling them out to many users, see: [run an on-demand workflow](on-demand-workflow.md). +> By default, a newly created workflow is disabled to allow for the testing of it first on smaller audiences. For more information about testing workflows before rolling them out to many users, see [Run an on-demand workflow](on-demand-workflow.md). -## Create a workflow using Microsoft Graph +## Create a lifecycle workflow by using Microsoft Graph -To create a workflow using Microsoft Graph API, see [Create workflow (lifecycle workflow)](/graph/api/identitygovernance-lifecycleworkflowscontainer-post-workflows) +To create a lifecycle workflow by using the Microsoft Graph API, see [Create workflow](/graph/api/identitygovernance-lifecycleworkflowscontainer-post-workflows). ## Next steps - [Manage a workflow's properties](manage-workflow-properties.md)-- [Manage Workflow Versions](manage-workflow-tasks.md)+- [Manage workflow versions](manage-workflow-tasks.md) |
active-directory | Delete Lifecycle Workflow | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/governance/delete-lifecycle-workflow.md | Title: 'Delete a Lifecycle workflow' -description: Describes how to delete a Lifecycle Workflow using. + Title: Delete a lifecycle workflow +description: Learn how to delete a lifecycle workflow. -# Delete a Lifecycle workflow (Preview) +# Delete a lifecycle workflow (preview) -You can remove workflows that are no longer needed. Deleting these workflows allows you to make sure your lifecycle strategy is up to date. When a workflow is deleted, it enters a soft delete state. During this period, it's still able to be viewed within the deleted workflows list, and can be restored if needed. 30 days after a workflow enters a soft delete state it will be permanently removed. If you don't wish to wait 30 days for a workflow to permanently delete you can always manually delete it yourself. +You can remove workflows that you no longer need. Deleting these workflows helps keep your lifecycle strategy up to date. ++When a workflow is deleted, it enters a soft-delete state. During this period, you can still view it in the list of deleted workflows and restore it if needed. A workflow is permanently removed 30 days after it enters a soft-delete state. If you don't want to wait 30 days for a workflow to be permanently deleted, you can manually delete it. ## Prerequisites -The Lifecycle Workflows preview requires Azure AD Premium P2. For more information, see: [License requirements](what-are-lifecycle-workflows.md#license-requirements). +The preview of lifecycle workflows requires Azure Active Directory (Azure AD) Premium P2. For more information, see [License requirements](what-are-lifecycle-workflows.md#license-requirements). -## Delete a workflow using the Azure portal +## Delete a workflow by using the Azure portal 1. Sign in to the [Azure portal](https://portal.azure.com). -1. Type in **Identity Governance** on the search bar near the top of the page and select it. --1. In the left menu, select **Lifecycle Workflows (Preview)**. +1. On the search bar near the top of the page, enter **Identity Governance**. Then select **Identity Governance** in the results. -1. select **Workflows (Preview)**. +1. On the left menu, select **Lifecycle Workflows (Preview)**. -1. On the workflows screen, select the workflow you want to delete. +1. Select **Workflows (Preview)**. - :::image type="content" source="media/delete-lifecycle-workflow/delete-button.png" alt-text="Screenshot of list of Workflows to delete."::: +1. On the **Workflows** page, select the workflow that you want to delete. Then select **Delete**. -1. With the workflow highlighted, select **Delete**. + :::image type="content" source="media/delete-lifecycle-workflow/delete-button.png" alt-text="Screenshot of a list of workflows with one selected, along with the Delete button."::: -1. Confirm you want to delete the selected workflow. - - :::image type="content" source="media/delete-lifecycle-workflow/delete-workflow.png" alt-text="Screenshot of confirming to delete a workflow."::: +1. Confirm that you want to delete the workflow by selecting the **Delete** button. -## View deleted workflows + :::image type="content" source="media/delete-lifecycle-workflow/delete-workflow.png" alt-text="Screenshot of confirming the deletion of a workflow."::: -After deleting workflows, you can view them on the **Deleted Workflows (Preview)** page. 
+## View deleted workflows in the Azure portal +After you delete workflows, you can view them on the **Deleted workflows** page. -1. On the left of the screen, select **Deleted Workflows (Preview)**. +1. On the left pane, select **Deleted workflows (Preview)**. -1. On this page, you'll see a list of deleted workflows, a description of the workflow, what date it was deleted, and its permanent delete date. By default the permanent delete date for a workflow is always 30 days after it was originally deleted. +1. On the **Deleted workflows** page, check the list of deleted workflows. Each workflow has a description, the date of deletion, and a permanent delete date. By default, the permanent delete date for a workflow is 30 days after it was originally deleted. - :::image type="content" source="media/delete-lifecycle-workflow/deleted-list.png" alt-text="Screenshot of a list of deleted workflows."::: - -1. To restore a deleted workflow, select the workflow you want to restore and select **Restore workflow**. + :::image type="content" source="media/delete-lifecycle-workflow/deleted-list.png" alt-text="Screenshot of a list of deleted workflows."::: -1. To permanently delete a workflow immediately, you select the workflow you want to delete from the list, and select **Delete permanently**. +1. To restore a deleted workflow, select it and then select **Restore workflow**. + To permanently delete a workflow immediately, select it and then select **Delete permanently**. - +## Delete a workflow by using Microsoft Graph -## Delete a workflow using Microsoft Graph +To delete a workflow by using an API via Microsoft Graph, see [Delete a lifecycle workflow](/graph/api/identitygovernance-workflow-delete?view=graph-rest-beta&preserve-view=true). -To delete a workflow using API via Microsoft Graph, see: [Delete workflow (lifecycle workflow)](/graph/api/identitygovernance-workflow-delete?view=graph-rest-beta&preserve-view=true). +## View deleted workflows by using Microsoft Graph -## View deleted workflows using Microsoft Graph +To view a list of deleted workflows by using an API via Microsoft Graph, see [List deleted workflows](/graph/api/identitygovernance-lifecycleworkflowscontainer-list-deleteditems). -To View a list of deleted workflows using API via Microsoft Graph, see: [List deleted workflows](/graph/api/identitygovernance-lifecycleworkflowscontainer-list-deleteditems). +## Permanently delete a workflow by using Microsoft Graph -## Permanently delete a workflow using Microsoft Graph +To permanently delete a workflow by using an API via Microsoft Graph, see [Permanently delete a deleted workflow](/graph/api/identitygovernance-deleteditemcontainer-delete). -To permanently delete a workflow using API via Microsoft Graph, see: [Permanently delete a deleted workflow](/graph/api/identitygovernance-deleteditemcontainer-delete) +## Restore a deleted workflow by using Microsoft Graph -## Restore deleted workflows using Microsoft Graph +To restore a deleted workflow by using an API via Microsoft Graph, see [Restore a deleted workflow](/graph/api/identitygovernance-workflow-restore). -To restore a deleted workflow using API via Microsoft Graph, see: [Restore a deleted workflow](/graph/api/identitygovernance-workflow-restore) > [!NOTE]-> Permanently deleted workflows are not able to be restored. +> You can't restore permanently deleted workflows. 
## Next steps -- [What are Lifecycle Workflows?](what-are-lifecycle-workflows.md)-- [Manage Lifecycle Workflow Versions](manage-workflow-tasks.md)+- [What are lifecycle workflows?](what-are-lifecycle-workflows.md) +- [Manage lifecycle workflow versions](manage-workflow-tasks.md) |
active-directory | Entitlement Management Access Package Auto Assignment Policy | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/governance/entitlement-management-access-package-auto-assignment-policy.md | -During this preview, you can have at most one automatic assignment policy in an access package. +You can have at most one automatic assignment policy in an access package, and the policy can only be created by an administrator. This article describes how to create an access package automatic assignment policy for an existing access package. You'll need to have attributes populated on the users who will be in scope for b To create a policy for an access package, you need to start from the access package's policy tab. Follow these steps to create a new policy for an access package. -**Prerequisite role:** Global administrator, Identity Governance administrator, Catalog owner, or Access package manager +**Prerequisite role:** Global administrator or Identity Governance administrator 1. In the Azure portal, click **Azure Active Directory** and then click **Identity Governance**. |
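As a hedged illustration of what the automatic assignment policy described in this row could look like programmatically, the sketch below assumes the Microsoft Graph entitlement management `assignmentPolicies` endpoint with an `attributeRuleMembers` target; the access package ID, rule syntax, and exact property names are assumptions to verify against the Graph reference.

```http
POST https://graph.microsoft.com/v1.0/identityGovernance/entitlementManagement/assignmentPolicies
Content-Type: application/json

{
  "displayName": "Automatic assignment policy",
  "description": "Automatically assign users whose department is Marketing",
  "allowedTargetScope": "specificDirectoryUsers",
  "specificAllowedTargets": [
    {
      "@odata.type": "#microsoft.graph.attributeRuleMembers",
      "description": "Users in the Marketing department",
      "membershipRule": "(user.department -eq \"Marketing\")"
    }
  ],
  "automaticRequestSettings": {
    "requestAccessForAllowedTargets": true
  },
  "accessPackage": {
    "id": "{accessPackageId}"
  }
}
```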
active-directory | Lifecycle Workflow Extensibility | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/governance/lifecycle-workflow-extensibility.md | -Lifecycle Workflows allow you to create workflows that can be triggered based on joiner, mover, or leaver scenarios. While Lifecycle Workflows provide several built-in tasks to automate common scenarios throughout the lifecycle of users, eventually you may reach the limits of these built-in tasks. With the extensibility feature, you're able to utilize the concept of custom task extensions to call-out to external systems as part of a workflow. By calling out to the external systems, you're able to accomplish things, which can extend the purpose of your workflows. When a user joins your organization you can have a workflow with a custom task extension that assigns a Teams number, or have a separate workflow that grants access to an email account for a manager when a user leaves. With the extensibility feature, Lifecycle Workflows currently support creating custom tasks extensions to call-out to [Azure Logic Apps](../../logic-apps/logic-apps-overview.md). +Lifecycle Workflows allow you to create workflows that can be triggered based on joiner, mover, or leaver scenarios. While Lifecycle Workflows provide several built-in tasks to automate common scenarios throughout the lifecycle of users, eventually you may reach the limits of these built-in tasks. With the extensibility feature, you're able to utilize custom task extensions to call out to external systems as part of a workflow. For example, when a user joins your organization, you can have a workflow with a custom task extension that assigns a Teams number, or have a separate workflow that grants access to an email account for a manager when a user leaves. With the extensibility feature, Lifecycle Workflows currently support creating custom task extensions that call out to [Azure Logic Apps](../../logic-apps/logic-apps-overview.md). -## Prerequisite Logic App roles required for integration with the custom task extension +## Logic Apps prerequisites -When you link your Azure Logic App with the custom task extension task, there are certain prerequisites that must be completed before the link can be established. +To link an Azure Logic App with a custom task extension, the following prerequisites must be in place: -To create a Logic App, you must have: +- An Azure subscription +- A resource group +- Permissions to create a new consumption-based Logic App, or access to an existing consumption-based Logic App -- A valid Azure subscription-- A compatible resource group where the Logic App is located--> [!NOTE] -> The resource group needs permissions to create, update, and read the Logic App while the custom extension is being created. --The roles on the Azure Logic App required with the custom task extension, are as follows: +One of the following Azure role assignments is required, either on the Logic App itself or on a higher scope such as the resource group, subscription, or management group: - **Logic App contributor** - **Contributor** - **Owner** > [!NOTE]-> The **Logic App Operator** role alone will not work with the custom task extension. For more information on the required **Logic App contributor** role, see: [Logic App Contributor](../../role-based-access-control/built-in-roles.md#logic-app-contributor). +> The **Logic App Operator** role by itself is not sufficient.
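As an illustration of granting one of these roles, the sketch below uses the Azure RBAC REST API to assign a role at resource-group scope. The role definition GUID, principal object ID, and new role assignment GUID are placeholders; substitute the built-in role you chose (for example, Logic App Contributor).

```http
PUT https://management.azure.com/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.Authorization/roleAssignments/{newRoleAssignmentGuid}?api-version=2022-04-01
Content-Type: application/json

{
  "properties": {
    "roleDefinitionId": "/subscriptions/{subscriptionId}/providers/Microsoft.Authorization/roleDefinitions/{roleDefinitionGuid}",
    "principalId": "{principalObjectId}"
  }
}
```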
## Custom task extension deployment scenarios When creating custom task extensions, the scenarios for how it interacts with Li :::image type="content" source="media/lifecycle-workflow-extensibility/task-extension-deployment-scenarios.png" alt-text="Screenshot of custom task deployment scenarios."::: - **Launch and continue** - The Azure Logic App is started, and the following task execution immediately continues with no response expected from the Azure Logic App. This scenario is best suited if the Lifecycle workflow doesn't require any feedback (including status) from the Azure Logic App. If the Logic App is started successfully, the Lifecycle Workflow task is considered a success.-- **Launch and wait** - The Azure Logic App is started, and the following task's execution waits on the response from the Logic App. You enter a time duration for how long the custom task extension should wait for a response from the Azure Logic App. If no response is received within a customer defined duration window, the task is considered failed.+- **Launch and wait** - The Azure Logic App is started, and the following task's execution waits on the response from the Logic App. You enter a time duration for how long the custom task extension should wait for a response from the Azure Logic App. If no response is received within the defined duration window, the task is considered failed. :::image type="content" source="media/lifecycle-workflow-extensibility/custom-task-launch-wait.png" alt-text="Screenshot of custom task launch and wait task choice." lightbox="media/lifecycle-workflow-extensibility/custom-task-launch-wait.png"::: > [!NOTE]-> You can also deploy a custom task that calls to a third party system. To learn more about this call, see: [taskProcessingResult: resume](/graph/api/identitygovernance-taskprocessingresult-resume). +> The response doesn't necessarily have to be provided by the Logic App; a third-party system can respond if the Logic App only acts as an intermediary. To learn more, see: [taskProcessingResult: resume](/graph/api/identitygovernance-taskprocessingresult-resume). A sketch of this resume callback appears after the high-level integration steps at the end of this article. + ## Response authorization -When you create a custom task extension that waits for a response from the Logic App, you're able to define which applications can send a response :::image type="content" source="media/lifecycle-workflow-extensibility/launch-wait-options.png" alt-text="Screenshot of custom task extension launch and wait options."::: -Response authorization can be utilized in one of the following ways: -- **System-assigned managed identity (Default)** - With this choice you Enable and utilize the Logic Apps system-assigned managed identity. For more information, see: [Authenticate access to Azure resources with managed identities in Azure Logic Apps](/azure/logic-apps/create-managed-service-identity)-- **No authorization** - With this choice you assign a Logic App or third party application an application permission (LifecycleWorkflows.ReadWrite.All), or role assignment (Lifecycle Workflows Administrator). This choice doesn't follow least privilege access as outlined in Azure Active Directory best practices.
For more information on best practices for roles, see: [Best Practices for Azure AD roles](/azure/active-directory/roles/best-practices).-- **Existing application** - With this choice you're able to choose an existing application to respond. You are able to choose applications that are user-assigned or regular applications. For more information on managed identity types, see: [Managed identity types](../managed-identities-azure-resources/overview.md#managed-identity-types).+- **System-assigned managed identity (Default)** - With this choice you enable and utilize the Logic Apps system-assigned managed identity. For more information, see: [Authenticate access to Azure resources with managed identities in Azure Logic Apps](/azure/logic-apps/create-managed-service-identity) +- **No authorization** - With this choice, no authorization is granted automatically; you separately have to assign an application permission (LifecycleWorkflows.ReadWrite.All) or a role assignment (Lifecycle Workflows Administrator). If an application is responding, we don't recommend this option, because it doesn't follow the principle of least privilege. This option may also be used if responses are only provided on behalf of a user (LifecycleWorkflows.ReadWrite.All delegated permission and the Lifecycle Workflows Administrator role assignment). +- **Existing application** - With this choice you're able to choose an existing application to respond. This can be a regular application, or a system-assigned or user-assigned managed identity. For more information on managed identity types, see: [Managed identity types](../managed-identities-azure-resources/overview.md#managed-identity-types). ## Custom task extension integration with Azure Logic Apps high-level steps The high-level steps for the Azure Logic Apps integration are as follows: > [!NOTE]-> Creating a custom task extension and logic app through the workflows page in the Azure portal will automate most of these steps. For a guide on creating a custom task extension this way, see: [Trigger Logic Apps based on custom task extensions (Preview)](trigger-custom-task.md). +> Creating a custom task extension and logic app through the Azure portal will automate most of these steps. For a guide on creating a custom task extension this way, see: [Trigger Logic Apps based on custom task extensions (Preview)](trigger-custom-task.md). - **Create a consumption-based Azure Logic App**: A consumption-based Azure Logic App that is called from the custom task extension.-- **Configure the Azure Logic App so its compatible with Lifecycle workflows**: Configuring the consumption-based Azure Logic App so that it can be used with the custom task extension.+- **Configure the Azure Logic App so it's compatible with Lifecycle workflows**: Configuring the consumption-based Azure Logic App so that it can be used with the custom task extension. For more information, see: [Configure a Logic App for Lifecycle Workflow use (Preview)](configure-logic-app-lifecycle-workflows.md) - **Build your custom business logic within your Azure Logic App**: Set up your business logic within the Azure Logic App using Logic App designer. - **Create a lifecycle workflow customTaskExtension which holds necessary information about the Azure Logic App**: Creating a custom task extension that references the configured Azure Logic App.
- **Update or create a Lifecycle workflow with the "Run a custom task extension" task, referencing your created customTaskExtension**: Adding the newly created custom task extension to a new workflow, or updating the information to an existing workflow. |
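To make the launch-and-wait response flow concrete, here's a sketch of the resume callback that a Logic App (or a third-party system acting through it) would send back to a waiting task, based on the taskProcessingResult: resume reference linked earlier. All IDs are placeholders, and the URL and body shape should be checked against that reference.

```http
POST https://graph.microsoft.com/beta/identityGovernance/lifecycleWorkflows/workflows/{workflowId}/runs/{runId}/userProcessingResults/{userProcessingResultId}/taskProcessingResults/{taskProcessingResultId}/resume
Content-Type: application/json

{
  "source": "sample",
  "type": "lifecycleEvent",
  "data": {
    "operationStatus": "Completed"
  }
}
```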
active-directory | Lifecycle Workflow Tasks | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/governance/lifecycle-workflow-tasks.md | Last updated 01/26/2023 # Lifecycle Workflow built-in tasks (Preview) -Lifecycle Workflows come with many pre-configured tasks that are designed to automate common lifecycle management scenarios. These built-in tasks can be utilized to make customized workflows to suit your organization's needs. These tasks can be configured within seconds to create new workflows. These tasks also have categories based on the Joiner-Mover-Leaver model so that they can be easily placed into workflows based on need. In this article you'll get the complete list of tasks, information on common parameters each task has, and a list of unique parameters needed for each specific task. +Lifecycle Workflows come with many pre-configured tasks that are designed to automate common lifecycle management scenarios. These built-in tasks can be utilized to make customized workflows to suit your organization's needs. These tasks can be configured within seconds to create new workflows. These tasks also have categories based on the Joiner-Mover-Leaver model so that they can be easily placed into workflows based on need. In this article you get the complete list of tasks, information on common parameters each task has, and a list of unique parameters needed for each specific task. ## Supported tasks Common task parameters are the non-unique parameters contained in every task. Wh ||| |category | A read-only string that identifies the category or categories of the task. Automatically determined when the taskDefinitionID is chosen. | |taskDefinitionId | A string referencing a taskDefinition that determines which task to run. |-|isEnabled | A boolean value that denotes whether the task is set to run or not. If set to "true" then the task will run. Defaults to true. | +|isEnabled | A boolean value that denotes whether the task is set to run or not. If set to "true" then the task runs. Defaults to true. | |displayName | A unique string that identifies the task. | |description | A string that describes the purpose of the task for administrative use. (Optional) |-|executionSequence | A read-only integer that states in what order the task will run in a workflow. For more information about executionSequence and workflow order, see: [Configure Scope](understanding-lifecycle-workflows.md#configure-scope). | +|executionSequence | A read-only integer that states in what order the task runs in a workflow. For more information about executionSequence and workflow order, see: [Configure Scope](understanding-lifecycle-workflows.md#configure-scope). | |continueOnError | A boolean value that determines if the failure of this task stops the subsequent workflows from running. | |arguments | Contains unique parameters relevant for the given task. | Emails, sent from tasks, are able to be customized. If you choose to customize t - **Subject:** Customizes the subject of emails. - **Message body:** Customizes the body of the emails being sent out.-- **Email language translation:** Overrides the email recipient's language settings. Custom text is not customized, and it is recommended to set this language to the same language as the custom text. +- **Email language translation:** Overrides the email recipient's language settings. Custom text isn't customized, and it's recommended to set this language to the same language as the custom text.
:::image type="content" source="media/lifecycle-workflow-task/customize-email-concept.png" alt-text="Screenshot of the customization email options."::: The Azure AD prerequisite to run the **Send welcome email to new hire** task is: - A populated mail attribute for the user. -For Microsoft Graph the parameters for the **Send welcome email to new hire** task are as follows: +For Microsoft Graph, the parameters for the **Send welcome email to new hire** task are as follows: |Parameter |Definition | ||| The Azure AD prerequisite to run the **Send onboarding reminder email** task is: - A populated manager's mail attribute for the user. -For Microsoft Graph the parameters for the **Send onboarding reminder email** task are as follows: +For Microsoft Graph, the parameters for the **Send onboarding reminder email** task are as follows: |Parameter |Definition | ||| The Azure AD prerequisites to run the **Generate Temporary Access Pass and send > [!IMPORTANT] > A user having this task run for them in a workflow must also not have any other authentication methods, sign-ins, or AAD role assignments for this task to work for them. -For Microsoft Graph the parameters for the **Generate Temporary Access Pass and send via email to user's manager** task are as follows: +For Microsoft Graph, the parameters for the **Generate Temporary Access Pass and send via email to user's manager** task are as follows: |Parameter |Definition | ||| For Microsoft Graph the parameters for the **Generate Temporary Access Pass and ### Add user to groups -Allows users to be added to Microsoft 365 and cloud-only security groups. Mail-enabled, distribution, dynamic and role-assignable groups are not supported. To control access to on-premises applications and resources, you need to enable group writeback. For more information, see [Azure AD Connect group writeback](../hybrid/how-to-connect-group-writeback-v2.md). +Allows users to be added to Microsoft 365 and cloud-only security groups. Mail-enabled, distribution, dynamic and role-assignable groups aren't supported. To control access to on-premises applications and resources, you need to enable group writeback. For more information, see [Azure AD Connect group writeback](../hybrid/how-to-connect-group-writeback-v2.md). You're able to customize the task name and description for this task. :::image type="content" source="media/lifecycle-workflow-task/add-group-task.png" alt-text="Screenshot of Workflows task: Add user to group task."::: -For Microsoft Graph the parameters for the **Add user to groups** task are as follows: +For Microsoft Graph, the parameters for the **Add user to groups** task are as follows: |Parameter |Definition | ||| You're able to add a user to an existing static team. You're able to customize t :::image type="content" source="media/lifecycle-workflow-task/add-team-task.png" alt-text="Screenshot of Workflows task: add user to team."::: -For Microsoft Graph the parameters for the **Add user to teams** task are as follows: +For Microsoft Graph, the parameters for the **Add user to teams** task are as follows: |Parameter |Definition | ||| For Microsoft Graph the parameters for the **Add user to teams** task are as fol ### Enable user account -Allows cloud-only user accounts to be enabled. Users with Azure AD role assignments are not supported, nor are users with membership or ownership of role-assignable groups. 
You can utilize Azure Active Directory's HR driven provisioning to on-premises Active Directory to disable and enable synchronized accounts with an attribute mapping to `accountDisabled` based on data from your HR source. For more information, see: [Workday Configure attribute mappings](../saas-apps/workday-inbound-tutorial.md#part-4-configure-attribute-mappings) and [SuccessFactors Configure attribute mappings](../saas-apps/sap-successfactors-inbound-provisioning-tutorial.md#part-4-configure-attribute-mappings). You're able to customize the task name and description for this task in the Azure portal. +Allows cloud-only user accounts to be enabled. Users with Azure AD role assignments aren't supported, nor are users with membership or ownership of role-assignable groups. You can utilize Azure Active Directory's HR driven provisioning to on-premises Active Directory to disable and enable synchronized accounts with an attribute mapping to `accountDisabled` based on data from your HR source. For more information, see: [Workday Configure attribute mappings](../saas-apps/workday-inbound-tutorial.md#part-4-configure-attribute-mappings) and [SuccessFactors Configure attribute mappings](../saas-apps/sap-successfactors-inbound-provisioning-tutorial.md#part-4-configure-attribute-mappings). You're able to customize the task name and description for this task in the Azure portal. :::image type="content" source="media/lifecycle-workflow-task/enable-task.png" alt-text="Screenshot of Workflows task: enable user account."::: -For Microsoft Graph the parameters for the **Enable user account** task are as follows: +For Microsoft Graph, the parameters for the **Enable user account** task are as follows: |Parameter |Definition | ||| The Azure AD prerequisite to run the **Run a Custom Task Extension** task is: - A Logic App that is compatible with the custom task extension. For more information, see: [Lifecycle workflow extensibility](lifecycle-workflow-extensibility.md). -For Microsoft Graph the parameters for the **Run a Custom Task Extension** task are as follows: +For Microsoft Graph, the parameters for the **Run a Custom Task Extension** task are as follows: |Parameter |Definition | ||| For more information on setting up a Logic app to run with Lifecycle Workflows, ### Disable user account -Allows cloud-only user accounts to be disabled. Users with Azure AD role assignments are not supported, nor are users with membership or ownership of role-assignable groups. You can utilize Azure Active Directory's HR driven provisioning to on-premises Active Directory to disable and enable synchronized accounts with an attribute mapping to `accountDisabled` based on data from your HR source. For more information, see: [Workday Configure attribute mappings](../saas-apps/workday-inbound-tutorial.md#part-4-configure-attribute-mappings) and [SuccessFactors Configure attribute mappings](../saas-apps/sap-successfactors-inbound-provisioning-tutorial.md#part-4-configure-attribute-mappings). You're able to customize the task name and description for this task in the Azure portal. +Allows cloud-only user accounts to be disabled. Users with Azure AD role assignments aren't supported, nor are users with membership or ownership of role-assignable groups. You can utilize Azure Active Directory's HR driven provisioning to on-premises Active Directory to disable and enable synchronized accounts with an attribute mapping to `accountDisabled` based on data from your HR source. 
For more information, see: [Workday Configure attribute mappings](../saas-apps/workday-inbound-tutorial.md#part-4-configure-attribute-mappings) and [SuccessFactors Configure attribute mappings](../saas-apps/sap-successfactors-inbound-provisioning-tutorial.md#part-4-configure-attribute-mappings). You're able to customize the task name and description for this task in the Azure portal. :::image type="content" source="media/lifecycle-workflow-task/disable-task.png" alt-text="Screenshot of Workflows task: disable user account."::: -For Microsoft Graph the parameters for the **Disable user account** task are as follows: +For Microsoft Graph, the parameters for the **Disable user account** task are as follows: |Parameter |Definition | ||| For Microsoft Graph the parameters for the **Disable user account** task are as ### Remove user from selected groups -Allows users to be removed from Microsoft 365 and cloud-only security groups. Mail-enabled, distribution, dynamic and role-assignable groups are not supported. To control access to on-premises applications and resources, you need to enable group writeback. For more information, see [Azure AD Connect group writeback](../hybrid/how-to-connect-group-writeback-v2.md). +Allows users to be removed from Microsoft 365 and cloud-only security groups. Mail-enabled, distribution, dynamic and role-assignable groups aren't supported. To control access to on-premises applications and resources, you need to enable group writeback. For more information, see [Azure AD Connect group writeback](../hybrid/how-to-connect-group-writeback-v2.md). You're able to customize the task name and description for this task in the Azure portal. You're able to customize the task name and description for this task in the Azur -For Microsoft Graph the parameters for the **Remove user from selected groups** task are as follows: +For Microsoft Graph, the parameters for the **Remove user from selected groups** task are as follows: |Parameter |Definition | ||| For Microsoft Graph the parameters for the **Remove user from selected groups** ### Remove users from all groups -Allows users to be removed from every Microsoft 365 and cloud-only security group they're a member of. Mail-enabled, distribution, dynamic and role-assignable groups are not supported. To control access to on-premises applications and resources, you need to enable group writeback. For more information, see [Azure AD Connect group writeback](../hybrid/how-to-connect-group-writeback-v2.md). +Allows users to be removed from every Microsoft 365 and cloud-only security group they're a member of. Mail-enabled, distribution, dynamic and role-assignable groups aren't supported. To control access to on-premises applications and resources, you need to enable group writeback. For more information, see [Azure AD Connect group writeback](../hybrid/how-to-connect-group-writeback-v2.md). You're able to customize the task name and description for this task in the Azur :::image type="content" source="media/lifecycle-workflow-task/remove-all-groups-task.png" alt-text="Screenshot of Workflows task: remove user from all groups."::: -For Microsoft Graph the parameters for the **Remove users from all groups** task are as follows: +For Microsoft Graph, the parameters for the **Remove users from all groups** task are as follows: |Parameter |Definition | ||| For Microsoft Graph the parameters for the **Remove users from all groups** task Allows a user to be removed from one or multiple static teams. 
You're able to customize the task name and description for this task in the Azure portal. :::image type="content" source="media/lifecycle-workflow-task/remove-user-team-task.png" alt-text="Screenshot of Workflows task: remove user from teams."::: -For Microsoft Graph the parameters for the **Remove User from Teams** task are as follows: +For Microsoft Graph, the parameters for the **Remove User from Teams** task are as follows: |Parameter |Definition | ||| For Microsoft Graph the parameters for the **Remove User from Teams** task are a Allows users to be removed from every static team they're a member of. You're able to customize the task name and description for this task in the Azure portal. :::image type="content" source="media/lifecycle-workflow-task/remove-user-all-team-task.png" alt-text="Screenshot of Workflows task: remove user from all teams."::: -For Microsoft Graph the parameters for the **Remove users from all teams** task are as follows: +For Microsoft Graph, the parameters for the **Remove users from all teams** task are as follows: |Parameter |Definition | ||| Allows all direct license assignments to be removed from a user. For group-based You're able to customize the task name and description for this task in the Azure portal. :::image type="content" source="media/lifecycle-workflow-task/remove-license-assignment-task.png" alt-text="Screenshot of Workflows task: remove all licenses from users."::: -For Microsoft Graph the parameters for the **Remove all license assignment from user** task are as follows: +For Microsoft Graph, the parameters for the **Remove all license assignment from user** task are as follows: |Parameter |Definition | ||| For Microsoft Graph the parameters for the **Remove all license assignment from ### Delete User -Allows cloud-only user accounts to be deleted. Users with Azure AD role assignments are not supported, nor are users with membership or ownership of role-assignable groups. You're able to customize the task name and description for this task in the Azure portal. +Allows cloud-only user accounts to be deleted. Users with Azure AD role assignments aren't supported, nor are users with membership or ownership of role-assignable groups. You're able to customize the task name and description for this task in the Azure portal. :::image type="content" source="media/lifecycle-workflow-task/delete-user-task.png" alt-text="Screenshot of Workflows task: Delete user account."::: -For Microsoft Graph the parameters for the **Delete User** task are as follows: +For Microsoft Graph, the parameters for the **Delete User** task are as follows: |Parameter |Definition | ||| The Azure AD prerequisite to run the **Send email on user last day** task are: - A populated manager attribute for the user. - A populated manager's mail attribute for the user. -For Microsoft Graph the parameters for the **Send email on user last day** task are as follows: +For Microsoft Graph, the parameters for the **Send email on user last day** task are as follows: |Parameter |Definition | ||| The Azure AD prerequisite to run the **Send email to users manager after their l - A populated manager's mail attribute for the user. -For Microsoft Graph the parameters for the **Send email to users manager after their last day** task are as follows: +For Microsoft Graph, the parameters for the **Send email to users manager after their last day** task are as follows: |Parameter |Definition | ||| |
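Tying the common parameters from this article together, a single entry in a workflow's `tasks` collection might look like the following sketch. The `taskDefinitionId` GUID is a placeholder for the built-in task definition you choose; `arguments` carries the task-specific parameters listed in each section, and read-only properties such as `category` and `executionSequence` are omitted because the service sets them.

```json
{
  "taskDefinitionId": "{builtInTaskDefinitionGuid}",
  "displayName": "Remove user from all groups",
  "description": "Clean up group memberships on the user's last day",
  "isEnabled": true,
  "continueOnError": false,
  "arguments": []
}
```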
active-directory | Tutorial Offboard Custom Workflow Portal | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/governance/tutorial-offboard-custom-workflow-portal.md | Title: 'Execute employee off-boarding tasks in real-time on their last day of work with Azure portal (preview)' -description: Tutorial for off-boarding users from an organization using Lifecycle workflows with Azure portal (preview). + Title: Execute employee termination tasks by using lifecycle workflows (preview) +description: Learn how to remove users from an organization in real time on their last day of work by using lifecycle workflows (preview) in the Azure portal. -# Execute employee off-boarding tasks in real-time on their last day of work with Azure portal (preview) +# Execute employee termination tasks by using lifecycle workflows (preview) -This tutorial provides a step-by-step guide on how to execute a real-time employee termination with Lifecycle workflows using the Azure portal. +This tutorial provides a step-by-step guide on how to execute a real-time employee termination by using lifecycle workflows (preview) in the Azure portal. -This off-boarding scenario runs a workflow on-demand and accomplishes the following tasks: - -1. Remove user from all groups -2. Remove user from all Teams -3. Delete user account +This *leaver* scenario runs a workflow on demand and accomplishes the following tasks: -You may learn more about running a workflow on-demand [here](on-demand-workflow.md). +- Remove the user from all groups. +- Remove the user from all Microsoft Teams memberships. +- Delete the user account. ++For more information, see [Run a workflow on demand](on-demand-workflow.md). ## Prerequisites -The Lifecycle Workflows preview requires Azure AD Premium P2. For more information, see: [License requirements](what-are-lifecycle-workflows.md#license-requirements). +The preview of lifecycle workflows requires Azure Active Directory (Azure AD) Premium P2. For more information, see [License requirements](what-are-lifecycle-workflows.md#license-requirements). ++## Before you begin ++As part of the prerequisites for completing this tutorial, you need an account that has group and Teams memberships and that can be deleted during the tutorial. For comprehensive instructions on how to complete these prerequisite steps, see [Prepare user accounts for lifecycle workflows](tutorial-prepare-azure-ad-user-accounts.md). ++The leaver scenario includes the following steps: ++1. Prerequisite: Create a user account that represents an employee leaving your organization. +1. Prerequisite: Prepare the user account with group and Teams memberships. +1. Create the lifecycle management workflow. +1. Run the workflow on demand. +1. Verify that the workflow was successfully executed. ++## Create a workflow by using the leaver template ++Use the following steps to create a leaver on-demand workflow that will execute a real-time employee termination by using lifecycle workflows in the Azure portal: ++1. Sign in to the [Azure portal](https://portal.azure.com). +2. On the right, select **Azure Active Directory**. +3. Select **Identity Governance**. +4. Select **Lifecycle workflows (Preview)**. +5. On the **Overview** tab, select **New workflow**. ++ :::image type="content" source="media/tutorial-lifecycle-workflows/new-workflow.png" alt-text="Screenshot of the Overview tab and the button for creating a new workflow." lightbox="media/tutorial-lifecycle-workflows/new-workflow.png"::: ++6. 
From the collection of templates, choose **Select** under **Real-time employee termination**. ++ :::image type="content" source="media/tutorial-lifecycle-workflows/select-template.png" alt-text="Screenshot of selecting a workflow template for real-time employee termination." lightbox="media/tutorial-lifecycle-workflows/select-template.png"::: +7. Configure basic information about the workflow, and then select **Next: Review tasks**. -## Before you begin + :::image type="content" source="media/tutorial-lifecycle-workflows/real-time-leaver.png" alt-text="Screenshot of the tab for basic workflow information." lightbox="media/tutorial-lifecycle-workflows/real-time-leaver.png"::: -As part of the prerequisites for completing this tutorial, you need an account that has group and Teams memberships and that can be deleted during the tutorial. For more comprehensive instructions on how to complete these prerequisite steps, you may refer to the [Preparing user accounts for Lifecycle workflows tutorial](tutorial-prepare-azure-ad-user-accounts.md). +8. Inspect the tasks if you want, but no additional configuration is needed. Select **Next: Select users** when you're finished. -The leaver scenario can be broken down into the following: -- **Prerequisite:** Create a user account that represents an employee leaving your organization-- **Prerequisite:** Prepare the user account with groups and Teams memberships-- Create the lifecycle management workflow-- Run the workflow on-demand-- Verify that the workflow was successfully executed+ :::image type="content" source="media/tutorial-lifecycle-workflows/real-time-tasks.png" alt-text="Screenshot of the tab for reviewing template tasks." lightbox="media/tutorial-lifecycle-workflows/real-time-tasks.png"::: -## Create a workflow using leaver template -Use the following steps to create a leaver on-demand workflow that will execute a real-time employee termination with Lifecycle workflows using the Azure portal. +9. Choose the **Select users to run now** option. It allows you to select users for which the workflow will be executed immediately after creation. Regardless of the selection, you can run the workflow on demand later at any time, as needed. - 1. Sign in to Azure portal - 2. On the right, select **Azure Active Directory**. - 3. Select **Identity Governance**. - 4. Select **Lifecycle workflows (Preview)**. - 5. On the **Overview (Preview)** page, select **New workflow**. - :::image type="content" source="media/tutorial-lifecycle-workflows/new-workflow.png" alt-text="Screenshot of selecting new workflow." lightbox="media/tutorial-lifecycle-workflows/new-workflow.png"::: + :::image type="content" source="media/tutorial-lifecycle-workflows/real-time-users.png" alt-text="Screenshot of the option for selecting users to run now." lightbox="media/tutorial-lifecycle-workflows/real-time-users.png"::: - 6. From the templates, select **Select** under **Real-time employee termination**. - :::image type="content" source="media/tutorial-lifecycle-workflows/select-template.png" alt-text="Screenshot of selecting template leaver workflow." lightbox="media/tutorial-lifecycle-workflows/select-template.png"::: +10. Select **Add users** to designate the users for this workflow. - 7. Next, you configure the basic information about the workflow. Select **Next:Review tasks** when you're done with this step. - :::image type="content" source="media/tutorial-lifecycle-workflows/real-time-leaver.png" alt-text="Screenshot of review template tasks." 
lightbox="media/tutorial-lifecycle-workflows/real-time-leaver.png"::: + :::image type="content" source="media/tutorial-lifecycle-workflows/real-time-add-users.png" alt-text="Screenshot of the button for adding users." lightbox="media/tutorial-lifecycle-workflows/real-time-add-users.png"::: - 8. On the following page, you may inspect the tasks if desired but no additional configuration is needed. Select **Next: Select users** when you're finished. - :::image type="content" source="media/tutorial-lifecycle-workflows/real-time-tasks.png" alt-text="Screenshot of template tasks." lightbox="media/tutorial-lifecycle-workflows/real-time-tasks.png"::: +11. A panel with the list of available users appears on the right side of the window. Choose **Select** when you're done with your selection. - 9. For the user selection, select **Select users**. This allows you to select users for which the workflow will be executed immediately after creation. Regardless of the selection, you can run the workflow on-demand later at any time as needed. - :::image type="content" source="media/tutorial-lifecycle-workflows/real-time-users.png" alt-text="Select real time leaver template users." lightbox="media/tutorial-lifecycle-workflows/real-time-users.png"::: - - 10. Next, select on **+Add users** to designate the users to be executed on this workflow. - :::image type="content" source="media/tutorial-lifecycle-workflows/real-time-add-users.png" alt-text="Screenshot of real time leaver add users." lightbox="media/tutorial-lifecycle-workflows/real-time-add-users.png"::: - - 11. A panel with the list of available users pops up on the right side of the screen. Select **Select** when you're done with your selection. - :::image type="content" source="media/tutorial-lifecycle-workflows/real-time-user-list.png" alt-text="Screenshot of real time leaver template selected users." lightbox="media/tutorial-lifecycle-workflows/real-time-user-list.png"::: + :::image type="content" source="media/tutorial-lifecycle-workflows/real-time-user-list.png" alt-text="Screenshot of a list of available users." lightbox="media/tutorial-lifecycle-workflows/real-time-user-list.png"::: - 12. Select **Next: Review and create** when you're satisfied with your selection. - :::image type="content" source="media/tutorial-lifecycle-workflows/real-time-review-users.png" alt-text="Screenshot of reviewing template users." lightbox="media/tutorial-lifecycle-workflows/real-time-review-users.png"::: +12. Select **Next: Review and create** when you're satisfied with your selection of users. - 13. On the review blade, verify the information is correct and select **Create**. - :::image type="content" source="media/tutorial-lifecycle-workflows/real-time-create.png" alt-text="Screenshot of creating real time leaver workflow." lightbox="media/tutorial-lifecycle-workflows/real-time-create.png"::: + :::image type="content" source="media/tutorial-lifecycle-workflows/real-time-review-users.png" alt-text="Screenshot of added users." lightbox="media/tutorial-lifecycle-workflows/real-time-review-users.png"::: -## Run the workflow -Now that the workflow is created, it will automatically run the workflow every 3 hours. Lifecycle workflows check every 3 hours for users in the associated execution condition and execute the configured tasks for those users. However, for the tutorial, we would like to run it immediately. To run a workflow immediately, we can use the on-demand feature. +13. Verify that the information is correct, and then select **Create**. 
->[!NOTE] ->Be aware that you currently cannot run a workflow on-demand if it is set to disabled. You need to set the workflow to enabled to use the on-demand feature. + :::image type="content" source="media/tutorial-lifecycle-workflows/real-time-create.png" alt-text="Screenshot of the tab for reviewing workflow choices, along with the button for creating the workflow." lightbox="media/tutorial-lifecycle-workflows/real-time-create.png"::: -To run a workflow on-demand, for users using the Azure portal, do the following steps: +## Run the workflow ++Now that you've created the workflow, it will automatically run every three hours. Lifecycle workflows check every three hours for users in the associated execution condition and execute the configured tasks for those users. ++To run the workflow immediately, you can use the on-demand feature. ++> [!NOTE] +> You currently can't run a workflow on demand if it's set to **Disabled**. You need to set the workflow to **Enabled** to use the on-demand feature. ++To run a workflow on demand for users by using the Azure portal: ++1. On the workflow screen, select the specific workflow that you want to run. +2. Select **Run on demand**. +3. On the **Select users** tab, select **Add users**. +4. Add users. +5. Select **Run workflow**. - ## Check tasks and workflow status -At any time, you may monitor the status of the workflows and the tasks. As a reminder, there are three different data pivots, users runs, and tasks that are currently available in public preview. You may learn more in the how-to guide [Check the status of a workflow (preview)](check-status-workflow.md). In the course of this tutorial, we look at the status using the user focused reports. +At any time, you can monitor the status of workflows and tasks. Three data pivots (users, runs, and tasks) are currently available in public preview. You can learn more in the how-to guide [Check the status of a workflow (preview)](check-status-workflow.md). In this tutorial, you check the status by using the user-focused reports. ++1. On the **Overview** page for the workflow, select **Workflow history (Preview)**.
- :::image type="content" source="media/tutorial-lifecycle-workflows/total-tasks-real-time.png" alt-text="Screenshot of total tasks for real time workflow." lightbox="media/tutorial-lifecycle-workflows/total-tasks-real-time.png"::: + :::image type="content" source="media/tutorial-lifecycle-workflows/user-summary-real-time.png" alt-text="Screenshot of real-time workflow history." lightbox="media/tutorial-lifecycle-workflows/user-summary-real-time.png"::: -1. To add an extra layer of granularity, you may select **Failed tasks** for the user Wade Warren to view the total number of failed tasks assigned to the user Wade Warren. - :::image type="content" source="media/tutorial-lifecycle-workflows/failed-tasks-real-time.png" alt-text="Screenshot of failed tasks for real time workflow." lightbox="media/tutorial-lifecycle-workflows/failed-tasks-real-time.png"::: +1. Select **Total tasks** for a user to view the total number of tasks created and their statuses. -1. Similarly, you may select **Unprocessed tasks** for the user Wade Warren to view the total number of unprocessed or canceled tasks assigned to the user Wade Warren. - :::image type="content" source="media/tutorial-lifecycle-workflows/canceled-tasks-real-time.png" alt-text="Screenshot of unprocessed tasks for real time workflow." lightbox="media/tutorial-lifecycle-workflows/canceled-tasks-real-time.png"::: + :::image type="content" source="media/tutorial-lifecycle-workflows/total-tasks-real-time.png" alt-text="Screenshot of total tasks for a real-time workflow." lightbox="media/tutorial-lifecycle-workflows/total-tasks-real-time.png"::: ++1. To add an extra layer of granularity, select **Failed tasks** for a user to view the total number of failed tasks assigned to that user. ++ :::image type="content" source="media/tutorial-lifecycle-workflows/failed-tasks-real-time.png" alt-text="Screenshot of failed tasks for a real-time workflow." lightbox="media/tutorial-lifecycle-workflows/failed-tasks-real-time.png"::: ++1. Select **Unprocessed tasks** for a user to view the total number of unprocessed or canceled tasks assigned to that user. ++ :::image type="content" source="media/tutorial-lifecycle-workflows/canceled-tasks-real-time.png" alt-text="Screenshot of unprocessed tasks for a real-time workflow." lightbox="media/tutorial-lifecycle-workflows/canceled-tasks-real-time.png"::: ## Next steps-- [Preparing user accounts for Lifecycle workflows (preview)](tutorial-prepare-azure-ad-user-accounts.md)-- [Complete employee offboarding tasks in real-time on their last day of work using Lifecycle Workflows APIs](/graph/tutorial-lifecycle-workflows-offboard-custom-workflow)++- [Prepare user accounts for lifecycle workflows (preview)](tutorial-prepare-azure-ad-user-accounts.md) +- [Complete tasks in real time on an employee's last day of work by using lifecycle workflow APIs](/graph/tutorial-lifecycle-workflows-offboard-custom-workflow) |
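For reference, the on-demand run in this tutorial also has a programmatic counterpart. Assuming the beta `activate` action on workflows (described in the on-demand article linked at the start of this tutorial), a sketch with placeholder IDs looks like this:

```http
POST https://graph.microsoft.com/beta/identityGovernance/lifecycleWorkflows/workflows/{workflowId}/activate
Content-Type: application/json

{
  "subjects": [
    { "id": "{userObjectId}" }
  ]
}
```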
active-directory | What Are Lifecycle Workflows | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/governance/what-are-lifecycle-workflows.md | Title: 'What are lifecycle workflows?' -description: Describes overview of Lifecycle workflow feature. + Title: What are lifecycle workflows? +description: Get an overview of the lifecycle workflow feature of Azure AD. -# What are Lifecycle Workflows? (Public Preview) +# What are lifecycle workflows (preview)? -Lifecycle Workflows is a new Identity Governance service that enables organizations to manage Azure AD users by automating these three basic lifecycle processes: +Lifecycle workflows (preview) are a new identity governance feature that enables organizations to manage Azure Active Directory (Azure AD) users by automating these three basic lifecycle processes: -- Joiner - When an individual comes into scope of needing access. An example is a new employee joining a company or organization.-- Mover - When an individual moves between boundaries within an organization. This movement may require more access or authorization. An example would be a user who was in marketing is now a member of the sales organization.-- Leaver - When an individual leaves the scope of needing access, access may need to be removed. Examples would be an employee who is retiring or an employee who has been terminated.+- **Joiner**: When an individual enters the scope of needing access. An example is a new employee joining a company or organization. +- **Mover**: When an individual moves between boundaries within an organization. This movement might require more access or authorization. An example is a user who was in marketing and is now a member of the sales organization. +- **Leaver**: When an individual leaves the scope of needing access. This movement might require the removal of access. Examples are an employee who's retiring or an employee who's terminated. -Workflows contain specific processes, which run automatically against users as they move through their life cycle. Workflows are made up of [Tasks](lifecycle-workflow-tasks.md) and [Execution conditions](understanding-lifecycle-workflows.md#understanding-lifecycle-workflows). +Workflows contain specific processes that run automatically against users as they move through their lifecycle. Workflows consist of [tasks](lifecycle-workflow-tasks.md) and [execution conditions](understanding-lifecycle-workflows.md#understanding-lifecycle-workflows). -Tasks are specific actions that run automatically when a workflow is triggered. An Execution condition defines the 'Scope' of "who" and the 'Trigger' of "when" a workflow will be performed. For example, sending a manager an email 7 days before the value in the NewEmployeeHireDate attribute of new employees can be described as a workflow. It consists of: - - Task: send email - - When (trigger): Seven days before the NewEmployeeHireDate attribute value - - Who (scope): new employees +Tasks are specific actions that run automatically when a workflow is triggered. An execution condition defines the scope of who's affected and the trigger of when a workflow will be performed. For example, sending a manager an email seven days before the value in the `NewEmployeeHireDate` attribute of new employees can be described as a workflow. It consists of: -Automatic workflow schedules [trigger](understanding-lifecycle-workflows.md#trigger-details) off of user attributes. 
Scoping of automatic workflows is possible using a wide range of user and extended attributes; such as the "department" that a user belongs to. +- Task: Send email. +- Who (scope): New employees. +- When (trigger): Seven days before the `NewEmployeeHireDate` attribute value. -Finally, Lifecycle Workflows can even [integrate with Logic Apps](lifecycle-workflow-extensibility.md) tasks ability to extend workflows for more complex scenarios using your existing Logic apps. +An automatic workflow schedules a [trigger](understanding-lifecycle-workflows.md#trigger-details) based on user attributes. Scoping of automatic workflows is possible through a wide range of user and extended attributes, such as the department that a user belongs to. +Lifecycle workflows can even [integrate with the ability of logic apps tasks to extend workflows](lifecycle-workflow-extensibility.md) for more complex scenarios through your existing logic apps. - :::image type="content" source="media/what-are-lifecycle-workflows/intro-2.png" alt-text="Lifecycle Workflows diagram." lightbox="media/what-are-lifecycle-workflows/intro-2.png"::: +## Why to use lifecycle workflows -## Why use Lifecycle workflows? -Anyone who wants to modernize their identity lifecycle management process for employees, needs to ensure: +Anyone who wants to modernize an identity lifecycle management process for employees needs to ensure: - - **New employee on-boarding** - That when a user joins the organization, they're ready to go on day one. They have the correct access to the information, membership to groups, and applications they need. - - **Employee retirement/terminations/off-boarding** - That users who are no longer tied to the company for various reasons (termination, separation, leave of absence or retirement), have their access revoked in a timely manner. - - **Easy to administer in my organization** - That there's a seamless process to accomplish the above tasks, that isn't overly burdensome or time consuming for Administrators. - - **Robust troubleshooting/auditing/compliance** - That there's the ability to easily troubleshoot issues when they arise and that there's sufficient logging to help with this and compliance related issues. +- That when users join the organization, they're ready to go on day one. They have the correct access to information, group memberships, and applications that they need. +- That users who are no longer tied to the company for various reasons (termination, separation, leave of absence, or retirement) have their access revoked in a timely way. +- That the process for providing or revoking access isn't overly burdensome or time consuming for administrators. +- That administrators and employees can easily troubleshoot problems, and that logging is sufficient to help with troubleshooting, auditing, and compliance. -The following are key reasons to use Lifecycle workflows. -- **Extend** your HR-driven provisioning process with other workflows that simplify and automate tasks. -- **Centralize** your workflow process so you can easily create and manage workflows all in one location.-- Easily **troubleshoot** workflow scenarios with the Workflow history and Audit logs.-- **Manage** user lifecycle at scale. 
As your organization grows, the need for other resources to manage user lifecycles are reduced.-- **Reduce** or remove manual tasks that were done in the past with automated lifecycle workflows.-- **Apply** logic apps to extend workflows for more complex scenarios using your existing Logic apps.+Key reasons to use lifecycle workflows include: +- Extend your HR-driven provisioning process with other workflows that simplify and automate tasks. +- Centralize your workflow process so you can easily create and manage workflows in one location. +- Easily troubleshoot workflow scenarios with the workflow history and audit logs. +- Manage user lifecycle at scale. As your organization grows, the need for other resources to manage user lifecycles decreases. +- Reduce or remove manual tasks. +- Apply logic apps to extend workflows for more complex scenarios with your existing logic apps. -All of the above can help ensure a holistic experience by allowing you to remove other dependencies and applications to achieve the same result. Thus translating into, increased on-boarding and off-boarding efficiency. +Those capabilities can help ensure a holistic experience by allowing you to remove other dependencies and applications to achieve the same result. You can then increase efficiency in new employee orientation and in removal of former employees from the system. +## When to use lifecycle workflows -## When to use Lifecycle Workflows -You can use Lifecycle workflows to address any of the following conditions. -- **Automating and extending user onboarding/HR provisioning** - Use Lifecycle workflows when you want to extend your HR provisioning scenarios by automating tasks such as generating temporary passwords and emailing managers. If you currently have a manual process for on-boarding, use Lifecycle workflows as part of an automated process.-- **Automate group membership**: When groups in your organization are well-defined, you can automate user membership of these groups. Some of the benefits and differences from dynamic groups include:- - LCW manages static groups, where a dynamic group rule isn't needed - - No need to have one rule per group – the LCW rule determines the set/scope of users to execute workflows against not which group - - LCW helps manage users ‘ lifecycle beyond attributes supported in dynamic groups – for example, ‘X’ days before the employeeHireDate - - LCW can perform actions on the group not just the membership. -- **Workflow history and auditing** Use Lifecycle workflows when you need to create an audit trail of user lifecycle processes. Using the portal you can view history and audits for on-boarding and off-boarding scenarios.-- **Automate user account management**: Making sure users who are leaving have their access to resources revoked is a key part of the identity lifecycle process. Lifecycle Workflows allow you to automate the disabling and removal of user accounts.-- **Integrate with Logic Apps**: Ability to apply logic apps to extend workflows for more complex scenarios using your existing Logic apps.+You can use lifecycle workflows to address any of the following conditions: ++- **Automating and extending user orientation and HR provisioning**: Use lifecycle workflows when you want to extend your HR provisioning scenarios by automating tasks such as generating temporary passwords and emailing managers. If you currently have a manual process for orientation, use lifecycle workflows as part of an automated process. 
+- **Automating group membership**: When groups in your organization are well defined, you can automate user membership in those groups. Benefits and differences from dynamic groups include: + - Lifecycle workflows manage static groups, where you don't need a dynamic group rule. + - There's no need to have one rule per group. Lifecycle workflow rules determine the scope of users to execute workflows against, not which group. + - Lifecycle workflows help manage users' lifecycle beyond attributes supported in dynamic groups--for example, a certain number of days before the `NewEmployeeHireDate` attribute value. + - Lifecycle workflows can perform actions on the group, not just the membership. +- **Workflow history and auditing**: Use lifecycle workflows when you need to create an audit trail of user lifecycle processes. By using the Azure portal, you can view history and audits for orientation and departure scenarios. +- **Automating user account management**: A key part of the identity lifecycle process is making sure that users who are leaving have their access to resources revoked. You can use lifecycle workflows to automate the disabling and removal of user accounts. +- **Integrating with logic apps**: You can apply logic apps to extend workflows for more complex scenarios. ## License requirements [!INCLUDE [Azure AD Premium P2 license](../../../includes/lifecycle-workflows-license.md)] +During this preview, you can: -### How many licenses must you have? --To preview the Lifecycle Workflows feature, you must have an Azure AD Premium P2 license in your tenant. During this preview, you're able to: - - Create, manage, and delete workflows up to the total limit of 50 workflows. - Trigger on-demand and scheduled workflow execution. - Manage and configure existing tasks to create workflows that are specific to your needs. - Create up to 100 custom task extensions to be used in your workflows.- - ## Next steps-- [Create a custom workflow using the Azure portal](tutorial-onboard-custom-workflow-portal.md)-- [Create a Lifecycle workflow](create-lifecycle-workflow.md)++- [Create a custom workflow by using the Azure portal](tutorial-onboard-custom-workflow-portal.md) +- [Create a lifecycle workflow](create-lifecycle-workflow.md) |
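To make the scope-and-trigger model from this overview concrete, here's a sketch of the `executionConditions` fragment of a workflow definition for the earlier example (act seven days before the hire date, scoped to one department). The property names follow the Microsoft Graph beta lifecycle workflows model and are assumptions to verify against that reference.

```json
"executionConditions": {
  "@odata.type": "#microsoft.graph.identityGovernance.triggerAndScopeBasedConditions",
  "scope": {
    "@odata.type": "#microsoft.graph.identityGovernance.ruleBasedSubjectSet",
    "rule": "(department -eq 'Sales')"
  },
  "trigger": {
    "@odata.type": "#microsoft.graph.identityGovernance.timeBasedAttributeTrigger",
    "timeBasedAttribute": "employeeHireDate",
    "offsetInDays": -7
  }
}
```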
active-directory | Add Application Portal Setup Oidc Sso | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/manage-apps/add-application-portal-setup-oidc-sso.md | It is recommended that you use a non-production environment to test the steps in To configure OIDC-based SSO, you need: - An Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F).-- One of the following roles: Global Administrator, Cloud Application Administrator, Application Administrator, or owner of the service principal.+- One of the following roles: Global Administrator or owner of the service principal. ## Add the application |
active-directory | Configure Admin Consent Workflow | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/manage-apps/configure-admin-consent-workflow.md | To enable the admin consent workflow and choose reviewers: 1. Select **Save**. It can take up to an hour for the workflow to become enabled. > [!NOTE]-> You can add or remove reviewers for this workflow by modifying the **Who can review admin consent requests** list. A current limitation of this feature is that a reviewer retains the ability to review requests that were made while they were designated as a reviewer. Additionally, new reviewers will not be assigned to requests that were created before they were set as a reviewer. +> You can add or remove reviewers for this workflow by modifying the **Who can review admin consent requests** list. A current limitation of this feature is that a reviewer retains the ability to review requests that were made while they were designated as a reviewer and will receive expiration reminder emails for those requests after they're removed from the reviewers list. Additionally, new reviewers will not be assigned to requests that were created before they were set as a reviewer. ## Configure the admin consent workflow using Microsoft Graph |
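The row above ends at the **Configure the admin consent workflow using Microsoft Graph** heading. A sketch of that Graph side, assuming the documented `adminConsentRequestPolicy` resource shape; the reviewer object ID is a placeholder:

```azurecli
# Enable the admin consent workflow and set a single reviewer.
az rest --method put \
    --url "https://graph.microsoft.com/v1.0/policies/adminConsentRequestPolicy" \
    --body '{
        "isEnabled": true,
        "notifyReviewers": true,
        "remindersEnabled": true,
        "requestDurationInDays": 30,
        "reviewers": [
            {"query": "/users/<reviewer-object-id>", "queryType": "MicrosoftGraph"}
        ]
    }'
```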
active-directory | Grant Admin Consent | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/manage-apps/grant-admin-consent.md | To grant tenant-wide admin consent to an app listed in **Enterprise applications 1. Select **Azure Active Directory**, and then select **Enterprise applications**. 1. Select the application to which you want to grant tenant-wide admin consent, and then select **Permissions**. :::image type="content" source="media/grant-tenant-wide-admin-consent/grant-tenant-wide-admin-consent.png" alt-text="Screenshot shows how to grant tenant-wide admin consent.":::--1. Add the redirect **URI** (https://entra.microsoft.com/TokenAuthorize) as permitted redirect **URI** to the app. 1. Carefully review the permissions that the application requires. If you agree with the permissions the application requires, select **Grant admin consent**. ## Grant admin consent in App registrations |
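For the grant-admin-consent flow above, the same tenant-wide consent can also be granted from the CLI; a minimal sketch, assuming your account holds a role that can consent and that `<application-client-id>` is a placeholder:

```azurecli
# Grant tenant-wide admin consent to the app's requested permissions.
az ad app permission admin-consent --id <application-client-id>
```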
active-directory | Cross Tenant Synchronization Configure Graph | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/multi-tenant-organizations/cross-tenant-synchronization-configure-graph.md | These steps describe how to use Microsoft Graph Explorer (recommended), but you 1. In the target tenant, use the [Create crossTenantAccessPolicyConfigurationPartner](/graph/api/crosstenantaccesspolicy-post-partners?view=graph-rest-beta&preserve-view=true) API to create a new partner configuration in a cross-tenant access policy between the target tenant and the source tenant. Use the source tenant ID in the request. + If you get a `Request_MultipleObjectsWithSameKeyValue` error, you might already have an existing configuration. For more information, see [Symptom - Request_MultipleObjectsWithSameKeyValue error](#symptomrequest_multipleobjectswithsamekeyvalue-error). + **Request** ```http These steps describe how to use Microsoft Graph Explorer (recommended), but you 1. Use the [Create identitySynchronization](/graph/api/crosstenantaccesspolicyconfigurationpartner-put-identitysynchronization?view=graph-rest-beta&preserve-view=true) API to enable user synchronization in the target tenant. + If you get a `Request_MultipleObjectsWithSameKeyValue` error, you might already have an existing policy. For more information, see [Symptom - Request_MultipleObjectsWithSameKeyValue error](#symptomrequest_multipleobjectswithsamekeyvalue-error). + **Request** ```http These steps describe how to use Microsoft Graph Explorer (recommended), but you 1. In the source tenant, use the [Create crossTenantAccessPolicyConfigurationPartner](/graph/api/crosstenantaccesspolicy-post-partners?view=graph-rest-beta&preserve-view=true) API to create a new partner configuration in a cross-tenant access policy between the source tenant and the target tenant. Use the target tenant ID in the request. + If you get a `Request_MultipleObjectsWithSameKeyValue` error, you might already have an existing configuration. For more information, see [Symptom - Request_MultipleObjectsWithSameKeyValue error](#symptomrequest_multipleobjectswithsamekeyvalue-error). + **Request** ```http Either the signed-in user doesn't have sufficient privileges, or you need to con 2. In [Microsoft Graph Explorer tool](https://aka.ms/ge), make sure you consent to the required permissions. See [Step 1: Sign in to tenants and consent to permissions](#step-1-sign-in-to-tenants-and-consent-to-permissions) earlier in this article. +#### Symptom - Request_MultipleObjectsWithSameKeyValue error ++When you try to make a Graph API call, you receive an error message similar to the following: ++``` +code: Request_MultipleObjectsWithSameKeyValue +message: Another object with the same value for property tenantId already exists. +message: A conflicting object with one or more of the specified property values is present in the directory. +``` ++**Cause** ++You are likely trying to create a configuration or object that already exists, possibly from a previous configuration. ++**Solution** ++1. Verify your request syntax and that you are using the correct tenant ID. ++1. Make a `GET` request to list the existing object. ++1. 
If you have an existing object, instead of making a create request using `POST` or `PUT`, you might need to make an update request using `PATCH`, such as: ++ - [Update crossTenantAccessPolicyConfigurationPartner](/graph/api/crosstenantaccesspolicyconfigurationpartner-update?view=graph-rest-beta&preserve-view=true) + - [Update crossTenantIdentitySyncPolicyPartner](/graph/api/crosstenantidentitysyncpolicypartner-update?view=graph-rest-beta&preserve-view=true) ++#### Symptom - Directory_ObjectNotFound error ++When you try to make a Graph API call, you receive an error message similar to the following: ++``` +code: Directory_ObjectNotFound +message: Unable to read the company information from the directory. +``` ++**Cause** ++You are likely trying to use `PATCH` to update an object that doesn't exist. ++**Solution** ++1. Verify your request syntax and that you are using the correct tenant ID. ++1. Make a `GET` request to verify the object doesn't exist. ++1. If the object doesn't exist, instead of making an update request using `PATCH`, you might need to make a create request using `POST` or `PUT`, such as: ++ - [Create identitySynchronization](/graph/api/crosstenantaccesspolicyconfigurationpartner-put-identitysynchronization?view=graph-rest-beta&preserve-view=true) + ## Next steps - [Azure AD synchronization API overview](/graph/api/resources/synchronization-overview?view=graph-rest-beta&preserve-view=true) |
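The GET-then-PATCH pattern recommended in the troubleshooting entries above can be scripted with `az rest`; a sketch, assuming the beta cross-tenant access policy endpoints and a placeholder tenant ID:

```azurecli
# Check whether a partner configuration already exists.
az rest --method get \
    --url "https://graph.microsoft.com/beta/policies/crossTenantAccessPolicy/partners/<tenant-id>"

# If it exists, update it with PATCH instead of re-creating it with POST.
az rest --method patch \
    --url "https://graph.microsoft.com/beta/policies/crossTenantAccessPolicy/partners/<tenant-id>" \
    --body '{"automaticUserConsentSettings": {"inboundAllowed": true}}'
```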
active-directory | Groups Assign Member Owner | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/privileged-identity-management/groups-assign-member-owner.md | When a membership or ownership is assigned, the assignment: ## Assign an owner or member of a group -Follow these steps to make a user eligible member or owner of a group. You will need to have Global Administrator, Privileged Role Administrator role, or be an Owner of the group. +Follow these steps to make a user an eligible member or owner of a group. You will need permissions to manage groups. For role-assignable groups, you need to have the Global Administrator or Privileged Role Administrator role, or be an Owner of the group. For non-role-assignable groups, you need to have the Global Administrator, Directory Writer, Groups Administrator, Identity Governance Administrator, or User Administrator role, or be an Owner of the group. Role assignments for administrators should be scoped at directory level (not administrative unit level). ++> [!NOTE] +> Other roles with permissions to manage groups (such as Exchange Administrators for non-role-assignable M365 groups) and administrators with assignments scoped at administrative unit level can manage groups through Groups API/UX and override changes made in Azure AD PIM. 1. [Sign in to the Azure portal](https://portal.azure.com). Follow these steps to make a user eligible member or owner of a group. You will ## Update or remove an existing role assignment -Follow these steps to update or remove an existing role assignment. You will need to have Global Administrator, Privileged Role Administrator role, or Owner role of the group. +Follow these steps to update or remove an existing role assignment. You will need permissions to manage groups. For role-assignable groups, you need to have the Global Administrator or Privileged Role Administrator role, or be an Owner of the group. For non-role-assignable groups, you need to have the Global Administrator, Directory Writer, Groups Administrator, Identity Governance Administrator, or User Administrator role, or be an Owner of the group. Role assignments for administrators should be scoped at directory level (not administrative unit level). ++> [!NOTE] +> Other roles with permissions to manage groups (such as Exchange Administrators for non-role-assignable M365 groups) and administrators with assignments scoped at administrative unit level can manage groups through Groups API/UX and override changes made in Azure AD PIM. 1. [Sign in to the Azure portal](https://portal.azure.com) with appropriate role permissions. |
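Eligible assignments like the ones described above can also be created programmatically; a hedged sketch, assuming the Graph beta `privilegedAccess/group` endpoints for PIM for Groups, with placeholder IDs and an illustrative 180-day eligibility window:

```azurecli
# Make a user an eligible member of a group through PIM for Groups.
az rest --method post \
    --url "https://graph.microsoft.com/beta/identityGovernance/privilegedAccess/group/eligibilityScheduleRequests" \
    --body '{
        "accessId": "member",
        "groupId": "<group-object-id>",
        "principalId": "<user-object-id>",
        "action": "adminAssign",
        "scheduleInfo": {
            "startDateTime": "2023-05-01T00:00:00Z",
            "expiration": {"type": "afterDuration", "duration": "P180D"}
        },
        "justification": "Eligible membership for project work"
    }'
```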
active-directory | Groups Discover Groups | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/privileged-identity-management/groups-discover-groups.md | Before you will start, you need an Azure AD Security group or Microsoft 365 grou Dynamic groups and groups synchronized from on-premises environment cannot be managed in PIM for Groups. -You should either be a group Owner, have Global Administrator role, or Privileged Role Administrator role to bring the group under management with PIM. +You need appropriate permissions to bring groups under management in Azure AD PIM. For role-assignable groups, you need to have the Global Administrator or Privileged Role Administrator role, or be an Owner of the group. For non-role-assignable groups, you need to have the Global Administrator, Directory Writer, Groups Administrator, Identity Governance Administrator, or User Administrator role, or be an Owner of the group. Role assignments for administrators should be scoped at directory level (not administrative unit level). ++> [!NOTE] +> Other roles with permissions to manage groups (such as Exchange Administrators for non-role-assignable M365 groups) and administrators with assignments scoped at administrative unit level can manage groups through Groups API/UX and override changes made in Azure AD PIM. 1. [Sign in to the Azure portal](https://portal.azure.com). |
active-directory | Groups Renew Extend | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/privileged-identity-management/groups-renew-extend.md | Privileged Identity Management (PIM) in Azure Active Directory (Azure AD), part ## Who can extend and renew -Only Global Administrators, Privileged Role Administrators, or group owners can extend or renew group membership/ownership time-bound assignments. The affected user or group can request to extend assignments that are about to expire and request to renew assignments that are already expired. +Only users with permissions to manage groups can extend or renew group membership or ownership time-bound assignments. The affected user or group can request to extend assignments that are about to expire and request to renew assignments that are already expired. ++Role-assignable groups can be managed by Global Administrator, Privileged Role Administrator, or Owner of the group. Non-role-assignable groups can be managed by Global Administrator, Directory Writer, Groups Administrator, Identity Governance Administrator, User Administrator, or Owner of the group. Role assignments for administrators should be scoped at directory level (not Administrative Unit level). ++> [!NOTE] +> Other roles with permissions to manage groups (such as Exchange Administrators for non-role-assignable M365 groups) and administrators with assignments scoped at administrative unit level can manage groups through Groups API/UX and override changes made in Azure AD PIM. ## When notifications are sent |
active-directory | Groups Role Settings | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/privileged-identity-management/groups-role-settings.md | -You need to have Global Administrator, Privileged Role Administrator, or group Owner permissions to manage settings for membership or ownership assignments of the group. Role settings are defined per role per group: all assignments for the same role (member or owner) for the same group follow same role settings. Role settings of one group are independent from role settings of another group. Role settings for one role (member) are independent from role settings for another role (owner). +You will need group management permissions to manage settings. For role-assignable groups, you need to have the Global Administrator or Privileged Role Administrator role, or be an Owner of the group. For non-role-assignable groups, you need to have the Global Administrator, Directory Writer, Groups Administrator, Identity Governance Administrator, or User Administrator role, or be an Owner of the group. Role assignments for administrators should be scoped at directory level (not Administrative Unit level). ++> [!NOTE] +> Other roles with permissions to manage groups (such as Exchange Administrators for non-role-assignable M365 groups) and administrators with assignments scoped at administrative unit level can manage groups through Groups API/UX and override changes made in Azure AD PIM. ++Role settings are defined per role per group: all assignments for the same role (member or owner) for the same group follow the same role settings. Role settings of one group are independent from role settings of another group. Role settings for one role (member) are independent from role settings for another role (owner). ## Update role settings |
active-directory | Permissions Reference | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/roles/permissions-reference.md | Azure Advanced Threat Protection | Monitor and respond to suspicious security ac [Microsoft 365 service health](/microsoft-365/enterprise/view-service-health) | View the health of Microsoft 365 services [Smart lockout](../authentication/howto-password-smart-lockout.md) | Define the threshold and duration for lockouts when failed sign-in events happen. [Password Protection](../authentication/concept-password-ban-bad.md) | Configure custom banned password list or on-premises password protection.+[Cross-tenant synchronization](../multi-tenant-organizations/cross-tenant-synchronization-overview.md) | Configure cross-tenant access settings for users in another tenant. Security Administrators can't directly create and delete users, but can indirectly create and delete synchronized users from another tenant when both tenants are configured for cross-tenant synchronization, which is a privileged permission. > [!div class="mx-tableFixed"] > | Actions | Description | |
active-directory | Howspace Provisioning Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/howspace-provisioning-tutorial.md | The scenario outlined in this tutorial assumes that you already have the followi * [An Azure AD tenant](../develop/quickstart-create-new-tenant.md) * A user account in Azure AD with [permission](../roles/permissions-reference.md) to configure provisioning (for example, Application Administrator, Cloud Application administrator, Application Owner, or Global Administrator).-* A user account in Howspace with Admin permissions. +* A Howspace subscription with single sign-on and SCIM features enabled. +* A user account in Howspace with Main User Dashboard privileges. ## Step 1. Plan your provisioning deployment 1. Learn about [how the provisioning service works](../app-provisioning/user-provisioning.md). The scenario outlined in this tutorial assumes that you already have the followi 1. Determine what data to [map between Azure AD and Howspace](../app-provisioning/customize-application-attributes.md). ## Step 2. Configure Howspace to support provisioning with Azure AD-Contact Howspace support to configure Howspace to support provisioning with Azure AD. +### Single sign-on configuration +1. Sign in to the Howspace Main User Dashboard, then select **Settings** from the menu. +1. In the settings list, select **single sign-on**. ++  ++1. Click the **Add SSO configuration** button. ++  ++1. Select either **Azure Active Directory (Multi-Tenant)** or **Azure Active Directory** based on your organization's Azure AD topology. ++  +  ++1. Enter your Azure AD Tenant ID, and click **OK** to save the configuration. ++### Provisioning configuration +1. In the settings list, select **System for Cross-domain Identity Management**. ++  ++1. Check the **Enable user synchronization** checkbox. +1. Copy the Tenant URL and Secret Token for later use in Azure AD. +1. Click **Save** to save the configuration. ++### Main user dashboard access control configuration +1. In the settings list, select **Main User Dashboard Access Control**. ++  ++1. Check the **Enable single sign-on for main users** checkbox. +1. Select the SSO configuration you created in the previous step. +1. Enter the object IDs of the Azure AD user groups that should have access to the Main User Dashboard in the **Limit to following user groups** field. You can specify multiple groups by separating the object IDs with a comma. +1. Click **Save** to save the configuration. ++### Workspace default access control configuration +1. In the settings list, select **Workspace default settings**. ++  ++1. In the Workspace default settings list, select **Login, registration and SSO**. ++  ++1. Check the **Users can login using single sign-on** checkbox. +1. Select the SSO configuration you created in the previous step. +1. Enter the object IDs of the Azure AD user groups that should have access to workspaces in the **Limit to following user groups** field. You can specify multiple groups by separating the object IDs with a comma. +1. You can modify the user groups for each workspace individually after creating the workspace. ## Step 3. Add Howspace from the Azure AD application gallery This section guides you through the steps to configure the Azure AD provisioning |active|Boolean|| |name.givenName|String|| |name.familyName|String||- |phoneNumbers[type eq "work"].value|String|| + |phoneNumbers[type eq "mobile"].value|String|| |externalId|String|| 1. 
Under the **Mappings** section, select **Synchronize Azure Active Directory Groups to Howspace**. |
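Before wiring up the Azure AD side of the Howspace integration above, it can help to smoke-test the SCIM endpoint with the tenant URL and secret token copied from the provisioning settings; a sketch, assuming the endpoint follows the standard SCIM 2.0 `/Users` convention:

```bash
# List one user to confirm the endpoint and token work; both values are
# placeholders copied from the Howspace SCIM settings page.
curl -H "Authorization: Bearer <secret-token>" \
    "<tenant-url>/Users?startIndex=1&count=1"
```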
aks | Tutorial Kubernetes Workload Identity | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/learn/tutorial-kubernetes-workload-identity.md | The following output example resembles successful creation of the resource group To install the aks-preview extension, run the following command: -```azurecli +```azurecli-interactive az extension add --name aks-preview ``` Run the following command to update to the latest version of the extension released: -```azurecli +```azurecli-interactive az extension update --name aks-preview ``` After a few minutes, the command completes and returns JSON-formatted informatio To get the OIDC Issuer URL and save it to an environment variable, run the following command. Replace the default values for the arguments `-n` (the cluster name) and `-g` (the resource group name): -```bash +```azurecli-interactive export AKS_OIDC_ISSUER="$(az aks show -n myAKSCluster -g myResourceGroup --query "oidcIssuerProfile.issuerUrl" -otsv)" ``` export FICID="fic-test-fic-name" Use the Azure CLI [az keyvault create][az-keyvault-create] command to create a Key Vault in the resource group created earlier. -```azurecli +```azurecli-interactive az keyvault create --resource-group "${RESOURCE_GROUP}" --location "${LOCATION}" --name "${KEYVAULT_NAME}" ``` At this point, your Azure account is the only one authorized to perform any oper To add a secret to the vault, run the Azure CLI [az keyvault secret set][az-keyvault-secret-set] command. The secret's name is the value you specified for the environment variable `KEYVAULT_SECRET_NAME`, and the command stores the value **Hello!** in it. -```azurecli +```azurecli-interactive az keyvault secret set --vault-name "${KEYVAULT_NAME}" --name "${KEYVAULT_SECRET_NAME}" --value 'Hello!' ``` export KEYVAULT_URL="$(az keyvault show -g ${RESOURCE_GROUP} -n ${KEYVAULT_NAME} Use the Azure CLI [az account set][az-account-set] command to set a specific subscription to be the current active subscription. Then use the [az identity create][az-identity-create] command to create a managed identity. -```azurecli +```azurecli-interactive az account set --subscription "${SUBSCRIPTION}" ``` -```azurecli +```azurecli-interactive az identity create --name "${UAID}" --resource-group "${RESOURCE_GROUP}" --location "${LOCATION}" --subscription "${SUBSCRIPTION}" ``` Next, you need to set an access policy for the managed identity to access the Key Vault secret by running the following commands: -```bash +```azurecli-interactive export USER_ASSIGNED_CLIENT_ID="$(az identity show --resource-group "${RESOURCE_GROUP}" --name "${UAID}" --query 'clientId' -otsv)" ``` -```azurecli +```azurecli-interactive az keyvault set-policy --name "${KEYVAULT_NAME}" --secret-permissions get --spn "${USER_ASSIGNED_CLIENT_ID}" ``` az keyvault set-policy --name "${KEYVAULT_NAME}" --secret-permissions get --spn Create a Kubernetes service account and annotate it with the client ID of the managed identity created in the previous step. Use the [az aks get-credentials][az-aks-get-credentials] command and replace the default value for the cluster name and the resource group name. -```azurecli +```azurecli-interactive az aks get-credentials -n myAKSCluster -g "${RESOURCE_GROUP}" ``` Serviceaccount/workload-identity-sa created Use the [az identity federated-credential create][az-identity-federated-credential-create] command to create the federated identity credential between the managed identity, the service account issuer, and the subject. 
-```azurecli +```azurecli-interactive az identity federated-credential create --name ${FICID} --identity-name ${UAID} --resource-group ${RESOURCE_GROUP} --issuer ${AKS_OIDC_ISSUER} --subject system:serviceaccount:${SERVICE_ACCOUNT_NAMESPACE}:${SERVICE_ACCOUNT_NAME} ``` kubectl delete pod quick-start kubectl delete sa "${SERVICE_ACCOUNT_NAME}" --namespace "${SERVICE_ACCOUNT_NAMESPACE}" ``` -```azurecli +```azurecli-interactive az group delete --name "${RESOURCE_GROUP}" ``` |
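The service account step mentioned in the row above pairs the Kubernetes object with the managed identity's client ID; a minimal sketch using the environment variables already defined in that tutorial and the documented `azure.workload.identity` annotation convention:

```bash
# Create the service account that the federated credential's subject refers to.
cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: ServiceAccount
metadata:
  name: ${SERVICE_ACCOUNT_NAME}
  namespace: ${SERVICE_ACCOUNT_NAMESPACE}
  annotations:
    azure.workload.identity/client-id: ${USER_ASSIGNED_CLIENT_ID}
EOF
```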
aks | Operator Best Practices Identity | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/operator-best-practices-identity.md | There are two levels of access needed to fully operate an AKS cluster: ## Use pod-managed identities -> **Best practice guidance** -> -> Don't use fixed credentials within pods or container images, as they are at risk of exposure or abuse. Instead, use *pod identities* to automatically request access using Azure AD. +Don't use fixed credentials within pods or container images, as they are at risk of exposure or abuse. Instead, use *pod identities* to automatically request access using Azure AD. > [!NOTE]-> Pod identities are intended for use with Linux pods and container images only. Pod-managed identities support for Windows containers is coming soon. +> Pod identities are intended for use with Linux pods and container images only. Support for Windows containers in pod-managed identities (preview) is coming soon. To access other Azure resources, like Azure Cosmos DB, Key Vault, or Blob storage, the pod needs authentication credentials. You could define authentication credentials with the container image or inject them as a Kubernetes secret. Either way, you would need to manually create and assign them. Usually, these credentials are reused across pods and aren't regularly rotated. |
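As a hedged illustration of the pod-identity guidance above, the preview feature is bound to a namespace with the aks-preview extension; the names below are placeholders:

```azurecli
# Bind a user-assigned managed identity into a namespace as a pod identity
# (preview; requires the aks-preview CLI extension).
az aks pod-identity add \
    --resource-group myResourceGroup \
    --cluster-name myAKSCluster \
    --namespace my-app \
    --name my-pod-identity \
    --identity-resource-id <managed-identity-resource-id>
```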
aks | Static Ip | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/static-ip.md | This article shows you how to create a static public IP address and assign it to ## Create a static IP address -1. Use the `az aks show`[az-aks-show] command to get the node resource group name of your AKS cluster, which follows this format: `MC_<resource group name>_<AKS cluster name>_<region>`. +1. Create a resource group for your IP address. ```azurecli-interactive- az aks show \ - --resource-group myResourceGroup \ - --name myAKSCluster - --query nodeResourceGroup - --output tsv + az group create --name myNetworkResourceGroup --location eastus ``` -2. Use the [`az network public ip create`][az-network-public-ip-create] command to create a static public IP address. The following example creates a static IP resource named *myAKSPublicIP* in the *MC_myResourceGroup_myAKSCluster_eastus* node resource group. +2. Use the [`az network public ip create`][az-network-public-ip-create] command to create a static public IP address. The following example creates a static IP resource named *myAKSPublicIP* in the *myNetworkResourceGroup* resource group. ```azurecli-interactive az network public-ip create \- --resource-group MC_myResourceGroup_myAKSCluster_eastus \ + --resource-group myNetworkResourceGroup \ --name myAKSPublicIP \ --sku Standard \ --allocation-method static This article shows you how to create a static public IP address and assign it to 3. After you create the static public IP address, use the [`az network public-ip list`][az-network-public-ip-list] command to get the IP address. Specify the name of the resource group and public IP address you created, and query for the *ipAddress*. ```azurecli-interactive- az network public-ip show --resource-group MC_myResourceGroup_myAKSCluster_eastus --name myAKSPublicIP --query ipAddress --output tsv + az network public-ip show --resource-group myNetworkResourceGroup --name myAKSPublicIP --query ipAddress --output tsv ``` ## Create a service using the static IP address This article shows you how to create a static public IP address and assign it to 1. Before creating a service, use the [`az role assignment create`][az-role-assignment-create] command to ensure the cluster identity used by the AKS cluster has delegated permissions to the resource group that contains the static IP address. ```azurecli-interactive+ PRINCIPAL_ID=$(az aks show --name <cluster name> --resource-group <cluster resource group> --query identity.principalId -o tsv) + RG_SCOPE=$(az group show --name myNetworkResourceGroup --query id -o tsv) az role assignment create \- --assignee <Client ID> \ + --assignee ${PRINCIPAL_ID} \ --role "Network Contributor" \- --scope /subscriptions/<subscription id>/resourceGroups/<MC_myResourceGroup_myAKSCluster_eastus> + --scope ${RG_SCOPE} ``` > [!IMPORTANT] This article shows you how to create a static public IP address and assign it to kind: Service metadata: annotations:- service.beta.kubernetes.io/azure-load-balancer-resource-group: MC_myResourceGroup_myAKSCluster_eastus + service.beta.kubernetes.io/azure-load-balancer-resource-group: myNetworkResourceGroup name: azure-load-balancer spec: loadBalancerIP: 40.121.183.52 |
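After applying the service manifest from the row above, you can watch for the reserved address to appear; a small check, assuming the service name from that manifest:

```bash
# EXTERNAL-IP should settle on the static address created earlier.
kubectl get service azure-load-balancer --watch
```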
analysis-services | Analysis Services Create Terraform | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/analysis-services/analysis-services-create-terraform.md | Title: 'Quickstart: Create an Azure Analysis Services server using Terraform' description: 'In this article, you create an Azure Analysis Services server using Terraform' Previously updated : 3/10/2023- Last updated : 4/14/2023+ |
api-management | Quickstart Terraform | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/quickstart-terraform.md | Title: 'Quickstart: Create an Azure API Management service using Terraform' description: 'In this article, you create an Azure API Management service using Terraform.' Previously updated : 3/13/2023- Last updated : 4/14/2023+ |
app-service | Deploy Zip | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/deploy-zip.md | Publish-AzWebApp -ResourceGroupName Default-Web-WestUS -Name MyApp -ArchivePath The following example uses the cURL tool to deploy a ZIP package. Replace the placeholders `<username>`, `<zip-package-path>`, and `<app-name>`. When prompted by cURL, type in the [deployment password](deploy-configure-credentials.md). ```bash-curl -X POST -u <username:password> --data-binary "@<zip-package-path>" https://<app-name>.scm.azurewebsites.net/api/publish?type=zip +curl -X POST -u <username:password> -T "<zip-package-path>" https://<app-name>.scm.azurewebsites.net/api/publish?type=zip ``` [!INCLUDE [deploying to network secured sites](../../includes/app-service-deploy-network-secured-sites.md)] Publish-AzWebapp -ResourceGroupName <group-name> -Name <app-name> -ArchivePath < The following example uses the cURL tool to deploy a .war, .jar, or .ear file. Replace the placeholders `<username>`, `<file-path>`, `<app-name>`, and `<package-type>` (`war`, `jar`, or `ear`, accordingly). When prompted by cURL, type in the [deployment password](deploy-configure-credentials.md). ```bash-curl -X POST -u <username> --data-binary @"<file-path>" https://<app-name>.scm.azurewebsites.net/api/publish?type=<package-type> +curl -X POST -u <username> -T "<file-path>" https://<app-name>.scm.azurewebsites.net/api/publish?type=<package-type> ``` [!INCLUDE [deploying to network secured sites](../../includes/app-service-deploy-network-secured-sites.md)] Not supported. See Azure CLI or Kudu API. The following example uses the cURL tool to deploy a startup file for your application. Replace the placeholders `<username>`, `<startup-file-path>`, and `<app-name>`. When prompted by cURL, type in the [deployment password](deploy-configure-credentials.md). ```bash-curl -X POST -u <username> --data-binary @"<startup-file-path>" https://<app-name>.scm.azurewebsites.net/api/publish?type=startup +curl -X POST -u <username> -T "<startup-file-path>" https://<app-name>.scm.azurewebsites.net/api/publish?type=startup ``` ### Deploy a library file curl -X POST -u <username> --data-binary @"<startup-file-path>" https://<app-nam The following example uses the cURL tool to deploy a library file for your application. Replace the placeholders `<username>`, `<lib-file-path>`, and `<app-name>`. When prompted by cURL, type in the [deployment password](deploy-configure-credentials.md). ```bash-curl -X POST -u <username> --data-binary @"<lib-file-path>" https://<app-name>.scm.azurewebsites.net/api/publish?type=lib&path="/home/site/deployments/tools/my-lib.jar" +curl -X POST -u <username> -T "<lib-file-path>" "https://<app-name>.scm.azurewebsites.net/api/publish?type=lib&path=/home/site/deployments/tools/my-lib.jar" ``` ### Deploy a static file curl -X POST -u <username> --data-binary @"<lib-file-path>" https://<app-name>.s The following example uses the cURL tool to deploy a config file for your application. Replace the placeholders `<username>`, `<config-file-path>`, and `<app-name>`. When prompted by cURL, type in the [deployment password](deploy-configure-credentials.md). 
```bash-curl -X POST -u <username> --data-binary @"<config-file-path>" https://<app-name>.scm.azurewebsites.net/api/publish?type=static&path="/home/site/deployments/tools/my-config.json" +curl -X POST -u <username> -T "<config-file-path>" "https://<app-name>.scm.azurewebsites.net/api/publish?type=static&path=/home/site/deployments/tools/my-config.json" ``` # [Kudu UI](#tab/kudu-ui) |
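As an alternative to the raw Kudu calls above, the same ZIP deployment can be driven through the CLI, which handles authentication for you; a sketch with placeholder names:

```azurecli
# Deploy a ZIP package without handling deployment credentials manually.
az webapp deploy \
    --resource-group <group-name> \
    --name <app-name> \
    --src-path <zip-package-path> \
    --type zip
```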
azure-app-configuration | Cli Samples | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-app-configuration/cli-samples.md | Title: Azure CLI samples - Azure App Configuration description: Information about sample scripts provided for Azure App Configuration--++ Last updated 08/09/2022 |
azure-app-configuration | Concept Customer Managed Keys | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-app-configuration/concept-customer-managed-keys.md | Title: Use customer-managed keys to encrypt your configuration data description: Encrypt your configuration data using customer-managed keys--++ Last updated 08/30/2022 |
azure-app-configuration | Concept Disaster Recovery | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-app-configuration/concept-disaster-recovery.md | Title: Azure App Configuration resiliency and disaster recovery description: Lean how to implement resiliency and disaster recovery with Azure App Configuration.--++ Last updated 07/09/2020 |
azure-app-configuration | Concept Enable Rbac | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-app-configuration/concept-enable-rbac.md | Title: Authorize access to Azure App Configuration using Azure Active Directory description: Enable Azure RBAC to authorize access to your Azure App Configuration instance--++ Last updated 05/26/2020 |
azure-app-configuration | Concept Feature Management | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-app-configuration/concept-feature-management.md | Title: Understand feature management using Azure App Configuration description: Turn features on and off using Azure App Configuration --++ |
azure-app-configuration | Concept Geo Replication | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-app-configuration/concept-geo-replication.md | Title: Geo-replication in Azure App Configuration description: Details of the geo-replication feature in Azure App Configuration. --++ |
azure-app-configuration | Concept Github Action | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-app-configuration/concept-github-action.md | Title: Sync your GitHub repository to App Configuration description: Use GitHub Actions to automatically update your App Configuration instance when you update your GitHub repository.--++ Last updated 05/28/2020 |
azure-app-configuration | Concept Key Value | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-app-configuration/concept-key-value.md | Title: Understand Azure App Configuration key-value store description: Understand key-value storage in Azure App Configuration, which stores configuration data as key-values. Key-values are a representation of application settings.--++ Last updated 09/14/2022 |
azure-app-configuration | Concept Point Time Snapshot | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-app-configuration/concept-point-time-snapshot.md | Title: Retrieve key-values from a point-in-time description: Retrieve old key-value pairs using point-in-time snapshots in Azure App Configuration, which maintains a record of changes to key-values. --++ Last updated 03/14/2022 |
azure-app-configuration | Concept Private Endpoint | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-app-configuration/concept-private-endpoint.md | Title: Using private endpoints for Azure App Configuration description: Secure your App Configuration store using private endpoints --++ Last updated 07/15/2020 |
azure-app-configuration | Enable Dynamic Configuration Dotnet Core | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-app-configuration/enable-dynamic-configuration-dotnet-core.md | ms.devlang: csharp Last updated 07/01/2019-+ #Customer intent: I want to dynamically update my app to use the latest configuration data in App Configuration. |
azure-app-configuration | Enable Dynamic Configuration Dotnet | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-app-configuration/enable-dynamic-configuration-dotnet.md | Title: '.NET Framework Tutorial: dynamic configuration in Azure App Configuration' description: In this tutorial, you learn how to dynamically update the configuration data for .NET Framework apps using Azure App Configuration. -+ ms.devlang: csharp Last updated 03/20/2023-+ #Customer intent: I want to dynamically update my .NET Framework app to use the latest configuration data in App Configuration. |
azure-app-configuration | Howto App Configuration Event | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-app-configuration/howto-app-configuration-event.md | Title: Use Event Grid for App Configuration data change notifications description: Learn how to use Azure App Configuration event subscriptions to send key-value modification events to a web endpoint -+ ms.assetid: ms.devlang: csharp Last updated 03/04/2020-+ |
azure-app-configuration | Howto Best Practices | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-app-configuration/howto-best-practices.md | Title: Azure App Configuration best practices | Microsoft Docs description: Learn best practices while using Azure App Configuration. Topics covered include key groupings, key-value compositions, App Configuration bootstrap, and more. documentationcenter: ''-+ editor: '' ms.assetid: Last updated 09/21/2022-+ |
azure-app-configuration | Howto Disable Public Access | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-app-configuration/howto-disable-public-access.md | Title: How to disable public access in Azure App Configuration description: How to disable public access to your Azure App Configuration store.--++ Last updated 07/12/2022 |
azure-app-configuration | Howto Feature Filters Aspnet Core | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-app-configuration/howto-feature-filters-aspnet-core.md | description: Learn how to use feature filters to enable conditional feature flag ms.devlang: csharp --++ Last updated 3/9/2020 |
azure-app-configuration | Howto Import Export Data | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-app-configuration/howto-import-export-data.md | Title: Import or export data with Azure App Configuration description: Learn how to import or export configuration data to or from Azure App Configuration. Exchange data between your App Configuration store and code project. -+ Last updated 08/24/2022-+ # Import or export configuration data |
azure-app-configuration | Howto Integrate Azure Managed Service Identity | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-app-configuration/howto-integrate-azure-managed-service-identity.md | Title: Use managed identities to access App Configuration description: Authenticate to Azure App Configuration using managed identities--++ |
azure-app-configuration | Howto Labels Aspnet Core | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-app-configuration/howto-labels-aspnet-core.md | |
azure-app-configuration | Howto Move Resource Between Regions | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-app-configuration/howto-move-resource-between-regions.md | Title: Move an App Configuration store to another region description: Learn how to move an App Configuration store to a different region. --++ Last updated 03/27/2023 |
azure-app-configuration | Howto Set Up Private Access | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-app-configuration/howto-set-up-private-access.md | Title: How to set up private access to an Azure App Configuration store description: How to set up private access to an Azure App Configuration store in the Azure portal and in the CLI.--++ Last updated 07/12/2022 |
azure-app-configuration | Howto Targetingfilter Aspnet Core | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-app-configuration/howto-targetingfilter-aspnet-core.md | |
azure-app-configuration | Integrate Ci Cd Pipeline | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-app-configuration/integrate-ci-cd-pipeline.md | Title: Integrate Azure App Configuration using a continuous integration and delivery pipeline description: Learn to implement continuous integration and delivery using Azure App Configuration -+ Last updated 08/30/2022-+ # Customer intent: I want to use Azure App Configuration data in my CI/CD pipeline. |
azure-app-configuration | Integrate Kubernetes Deployment Helm | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-app-configuration/integrate-kubernetes-deployment-helm.md | Title: Integrate Azure App Configuration with Kubernetes Deployment using Helm description: Learn how to use dynamic configurations in Kubernetes deployment with Helm. -+ Last updated 03/27/2023-+ #Customer intent: I want to use Azure App Configuration data in Kubernetes deployment with Helm. |
azure-app-configuration | Manage Feature Flags | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-app-configuration/manage-feature-flags.md | |
azure-app-configuration | Monitor App Configuration Reference | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-app-configuration/monitor-app-configuration-reference.md | Title: Monitoring Azure App Configuration data reference description: Important Reference material needed when you monitor App Configuration --++ Last updated 05/05/2021 |
azure-app-configuration | Monitor App Configuration | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-app-configuration/monitor-app-configuration.md | Title: Monitor Azure App Configuration description: Start here to learn how to monitor App Configuration --++ Last updated 05/05/2021 |
azure-app-configuration | Overview Managed Identity | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-app-configuration/overview-managed-identity.md | Title: Configure managed identities with Azure App Configuration description: Learn how managed identities work in Azure App Configuration and how to configure a managed identity-+ Last updated 02/25/2020-+ |
azure-app-configuration | Overview | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-app-configuration/overview.md | Title: What is Azure App Configuration? description: Read an overview of the Azure App Configuration service. Understand why you would want to use App Configuration, and learn how you can use it.--++ Last updated 03/20/2023 |
azure-app-configuration | Policy Reference | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-app-configuration/policy-reference.md | Title: Built-in policy definitions for Azure App Configuration description: Lists Azure Policy built-in policy definitions for Azure App Configuration. These built-in policy definitions provide common approaches to managing your Azure resources. Last updated 02/21/2023 --++ |
azure-app-configuration | Powershell Samples | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-app-configuration/powershell-samples.md | |
azure-app-configuration | Pull Key Value Devops Pipeline | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-app-configuration/pull-key-value-devops-pipeline.md | Title: Pull settings to App Configuration with Azure Pipelines description: Learn to use Azure Pipelines to pull key-values to an App Configuration Store -+ Last updated 11/17/2020-+ # Pull settings to App Configuration with Azure Pipelines |
azure-app-configuration | Push Kv Devops Pipeline | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-app-configuration/push-kv-devops-pipeline.md | Title: Push settings to App Configuration with Azure Pipelines description: Learn to use Azure Pipelines to push key-values to an App Configuration Store -+ Last updated 02/23/2021-+ # Push settings to App Configuration with Azure Pipelines |
azure-app-configuration | Quickstart Azure App Configuration Create | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-app-configuration/quickstart-azure-app-configuration-create.md | Title: "Quickstart: Create an Azure App Configuration store"--++ description: "In this quickstart, learn how to create an App Configuration store." ms.devlang: csharp |
azure-app-configuration | Quickstart Bicep | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-app-configuration/quickstart-bicep.md | Title: Create an Azure App Configuration store using Bicep description: Learn how to create an Azure App Configuration store using Bicep.--++ Last updated 05/06/2022 |
azure-app-configuration | Quickstart Container Apps | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-app-configuration/quickstart-container-apps.md | Title: "Quickstart: Use Azure App Configuration in Azure Container Apps" description: Learn how to connect a containerized application to Azure App Configuration, using Service Connector. -+ Last updated 03/02/2023-+ |
azure-app-configuration | Quickstart Dotnet App | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-app-configuration/quickstart-dotnet-app.md | Title: Quickstart for Azure App Configuration with .NET Framework | Microsoft Do description: In this article, create a .NET Framework app with Azure App Configuration to centralize storage and management of application settings separate from your code. documentationcenter: ''-+ ms.devlang: csharp Last updated 02/28/2023-+ #Customer intent: As a .NET Framework developer, I want to manage all my app settings in one place. # Quickstart: Create a .NET Framework app with Azure App Configuration |
azure-app-configuration | Quickstart Dotnet Core App | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-app-configuration/quickstart-dotnet-core-app.md | Title: Quickstart for Azure App Configuration with .NET Core | Microsoft Docs description: In this quickstart, create a .NET Core app with Azure App Configuration to centralize storage and management of application settings separate from your code. -+ ms.devlang: csharp Last updated 03/20/2023-+ #Customer intent: As a .NET Core developer, I want to manage all my app settings in one place. # Quickstart: Create a .NET Core app with App Configuration |
azure-app-configuration | Quickstart Feature Flag Azure Functions Csharp | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-app-configuration/quickstart-feature-flag-azure-functions-csharp.md | Title: Quickstart for adding feature flags to Azure Functions | Microsoft Docs description: In this quickstart, use Azure Functions with feature flags from Azure App Configuration and test the function locally. -+ ms.devlang: csharp Last updated 3/20/2023-+ # Quickstart: Add feature flags to an Azure Functions app |
azure-app-configuration | Quickstart Feature Flag Dotnet | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-app-configuration/quickstart-feature-flag-dotnet.md | Title: Quickstart for adding feature flags to .NET Framework apps | Microsoft Do description: A quickstart for adding feature flags to .NET Framework apps and managing them in Azure App Configuration documentationcenter: ''-+ editor: '' ms.assetid: |
azure-app-configuration | Quickstart Javascript | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-app-configuration/quickstart-javascript.md | Title: Quickstart for using Azure App Configuration with JavaScript apps | Microsoft Docs description: In this quickstart, create a Node.js app with Azure App Configuration to centralize storage and management of application settings separate from your code. -+ ms.devlang: javascript Last updated 03/20/2023-+ #Customer intent: As a JavaScript developer, I want to manage all my app settings in one place. # Quickstart: Create a JavaScript app with Azure App Configuration |
azure-app-configuration | Quickstart Python Provider | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-app-configuration/quickstart-python-provider.md | Title: Quickstart for using Azure App Configuration with Python apps | Microsoft Learn description: In this quickstart, create a Python app with the Azure App Configuration to centralize storage and management of application settings separate from your code. -+ ms.devlang: python Last updated 03/20/2023-+ #Customer intent: As a Python developer, I want to manage all my app settings in one place. # Quickstart: Create a Python app with Azure App Configuration |
azure-app-configuration | Quickstart Python | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-app-configuration/quickstart-python.md | Title: Using Azure App Configuration in Python apps with the Azure SDK for Python | Microsoft Learn description: This document shows examples of how to use the Azure SDK for Python to access your data in Azure App Configuration. -+ ms.devlang: python Last updated 11/17/2022-+ #Customer intent: As a Python developer, I want to use the Azure SDK for Python to access my data in Azure App Configuration. # Create a Python app with the Azure SDK for Python |
azure-app-configuration | Quickstart Resource Manager | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-app-configuration/quickstart-resource-manager.md | Title: Create an Azure App Configuration store by using Azure Resource Manager template (ARM template) description: Learn how to create an Azure App Configuration store by using Azure Resource Manager template (ARM template).--++ Last updated 06/09/2021 |
azure-app-configuration | Rest Api Authentication Azure Ad | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-app-configuration/rest-api-authentication-azure-ad.md | Title: Azure Active Directory REST API - authentication description: Use Azure Active Directory to authenticate to Azure App Configuration by using the REST API--++ Last updated 08/17/2020 |
azure-app-configuration | Rest Api Authentication Hmac | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-app-configuration/rest-api-authentication-hmac.md | Title: Azure App Configuration REST API - HMAC authentication description: Use HMAC to authenticate to Azure App Configuration by using the REST API--++ ms.devlang: csharp, golang, java, javascript, powershell, python |
azure-app-configuration | Rest Api Authentication Index | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-app-configuration/rest-api-authentication-index.md |  Title: Azure App Configuration REST API - Authentication description: Reference pages for authentication using the Azure App Configuration REST API--++ Last updated 08/17/2020 |
azure-app-configuration | Rest Api Authorization Azure Ad | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-app-configuration/rest-api-authorization-azure-ad.md | Title: Azure App Configuration REST API - Azure Active Directory authorization description: Use Azure Active Directory for authorization against Azure App Configuration by using the REST API--++ Last updated 08/17/2020 |
azure-app-configuration | Rest Api Authorization Hmac | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-app-configuration/rest-api-authorization-hmac.md | Title: Azure App Configuration REST API - HMAC authorization description: Use HMAC for authorization against Azure App Configuration using the REST API--++ Last updated 08/17/2020 |
azure-app-configuration | Rest Api Authorization Index | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-app-configuration/rest-api-authorization-index.md |  Title: Azure App Configuration REST API - Authorization description: Reference pages for authorization using the Azure App Configuration REST API--++ Last updated 08/17/2020 |
azure-app-configuration | Rest Api Consistency | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-app-configuration/rest-api-consistency.md |  Title: Azure App Configuration REST API - consistency description: Reference pages for ensuring real-time consistency by using the Azure App Configuration REST API--++ Last updated 08/17/2020 |
azure-app-configuration | Rest Api Fiddler | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-app-configuration/rest-api-fiddler.md |  Title: Azure Active Directory REST API - Test Using Fiddler description: Use Fiddler to test the Azure App Configuration REST API--++ Last updated 08/17/2020 |
azure-app-configuration | Rest Api Headers | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-app-configuration/rest-api-headers.md | Title: Azure App Configuration REST API - Headers description: Reference pages for headers used with the Azure App Configuration REST API--++ Last updated 08/17/2020 |
azure-app-configuration | Rest Api Key Value | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-app-configuration/rest-api-key-value.md |  Title: Azure App Configuration REST API - key-value description: Reference pages for working with key-values by using the Azure App Configuration REST API--++ Last updated 08/17/2020 |
azure-app-configuration | Rest Api Keys | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-app-configuration/rest-api-keys.md | Title: Azure App Configuration REST API - Keys description: Reference pages for working with keys using the Azure App Configuration REST API--++ Last updated 08/17/2020 |
azure-app-configuration | Rest Api Labels | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-app-configuration/rest-api-labels.md | Title: Azure App Configuration REST API - Labels description: Reference pages for working with labels using the Azure App Configuration REST API--++ Last updated 08/17/2020 |
azure-app-configuration | Rest Api Locks | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-app-configuration/rest-api-locks.md | Title: Azure App Configuration REST API - locks description: Reference pages for working with key-value locks by using the Azure App Configuration REST API--++ Last updated 08/17/2020 |
azure-app-configuration | Rest Api Postman | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-app-configuration/rest-api-postman.md |  Title: Azure Active Directory REST API - Test by using Postman description: Use Postman to test the Azure App Configuration REST API--++ Last updated 08/17/2020 |
azure-app-configuration | Rest Api Revisions | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-app-configuration/rest-api-revisions.md | Title: Azure App Configuration REST API - key-value revisions description: Reference pages for working with key-value revisions by using the Azure App Configuration REST API--++ Last updated 08/17/2020 |
azure-app-configuration | Rest Api Throttling | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-app-configuration/rest-api-throttling.md | Title: Azure App Configuration REST API - Throttling description: Reference pages for understanding throttling when using the Azure App Configuration REST API--++ Last updated 08/17/2020 |
azure-app-configuration | Rest Api Versioning | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-app-configuration/rest-api-versioning.md | Title: Azure App Configuration REST API - versioning description: Reference pages for versioning by using the Azure App Configuration REST API--++ Last updated 08/17/2020 |
azure-app-configuration | Rest Api | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-app-configuration/rest-api.md | Title: Azure App Configuration REST API description: Reference pages for the Azure App Configuration REST API--++ Last updated 11/28/2022 |
azure-app-configuration | Cli Create Service | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-app-configuration/scripts/cli-create-service.md | Title: Azure CLI Script Sample - Create an Azure App Configuration Store description: Create an Azure App Configuration store using a sample Azure CLI script. See reference article links to commands used in the script. -+ Last updated 01/18/2023-+ |
azure-app-configuration | Cli Delete Service | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-app-configuration/scripts/cli-delete-service.md | Title: Azure CLI Script Sample - Delete an Azure App Configuration Store description: Delete an Azure App Configuration store using a sample Azure CLI script. See reference article links to commands used in the script. -+ ms.devlang: azurecli Last updated 02/19/2020-+ |
azure-app-configuration | Cli Export | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-app-configuration/scripts/cli-export.md | Title: Azure CLI Script Sample - Export from an Azure App Configuration Store description: Use Azure CLI script to export configuration from Azure App Configuration -+ ms.devlang: azurecli Last updated 02/19/2020-+ |
azure-app-configuration | Cli Import | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-app-configuration/scripts/cli-import.md | Title: Azure CLI script sample - Import to an App Configuration store description: Use Azure CLI script - Importing configuration to Azure App Configuration -+ ms.devlang: azurecli Last updated 02/19/2020-+ |
azure-app-configuration | Cli Work With Keys | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-app-configuration/scripts/cli-work-with-keys.md | Title: Azure CLI Script Sample - Work with key-values in App Configuration Store description: Use Azure CLI script to create, view, update and delete key values from App Configuration store -+ ms.devlang: azurecli Last updated 02/19/2020-+ |
azure-app-configuration | Powershell Create Service | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-app-configuration/scripts/powershell-create-service.md | Title: PowerShell script sample - Create an Azure App Configuration store description: Create an Azure App Configuration store using a sample PowerShell script. See reference article links to commands used in the script. -+ Last updated 02/12/2023-+ |
azure-app-configuration | Powershell Delete Service | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-app-configuration/scripts/powershell-delete-service.md | Title: PowerShell script sample - Delete an Azure App Configuration store description: Delete an Azure App Configuration store using a sample PowerShell script. See reference article links to commands used in the script. -+ Last updated 02/02/2023-+ |
azure-app-configuration | Security Controls Policy | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-app-configuration/security-controls-policy.md | Title: Azure Policy Regulatory Compliance controls for Azure App Configuration description: Lists Azure Policy Regulatory Compliance controls available for Azure App Configuration. These built-in policy definitions provide common approaches to managing the compliance of your Azure resources. Last updated 02/14/2023 --++ |
azure-app-configuration | Use Feature Flags Dotnet Core | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-app-configuration/use-feature-flags-dotnet-core.md | Title: Tutorial for using feature flags in a .NET Core app | Microsoft Docs description: In this tutorial, you learn how to implement feature flags in .NET Core apps. documentationcenter: ''-+ editor: '' ms.assetid: |
azure-app-configuration | Use Key Vault References Dotnet Core | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-app-configuration/use-key-vault-references-dotnet-core.md | Title: Tutorial for using Azure App Configuration Key Vault references in an ASP description: In this tutorial, you learn how to use Azure App Configuration's Key Vault references from an ASP.NET Core app documentationcenter: ''-+ editor: '' ms.assetid: |
azure-cache-for-redis | Cache Redis Modules | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-cache-for-redis/cache-redis-modules.md | With Azure Cache for Redis, you can use Redis modules as libraries to add more d For more information on creating an Enterprise cache, see [Quickstart: Create a Redis Enterprise cache](quickstart-create-redis-enterprise.md). -Modules were introduced in open-source Redis 4.0. The modules extend the use-cases of Redis by adding functionality like search capabilities and data structures like **bloom and cuckoo filters**. +Modules were introduced in open-source Redis 4.0. The modules extend the use-cases of Redis by adding functionality like search capabilities and data structures like bloom and cuckoo filters. ## Scope of Redis modules Features include: - Geo-filtering - Boolean queries -Additionally, **RediSearch** can function as a secondary index, expanding your cache beyond a key-value structure and offering more sophisticated queries. +Additionally, **RediSearch** can function as a secondary index, expanding your cache beyond a key-value structure and offering more sophisticated queries. -You can use **RediSearch** is used in a wide variety of use-cases, including real-time inventory, enterprise search, and in indexing external databases. [For more information, see the RediSearch documentation page](https://redis.io/docs/stack/search/). +**RediSearch** also includes functionality to perform [vector similarity queries](https://redis.io/docs/stack/search/reference/vectors/) such as K-nearest neighbor (KNN) search. This feature allows Azure Cache for Redis to be used as a vector database, which is useful in AI use-cases like [semantic answer engines or any other application that requires the comparison of embeddings vectors](https://redis.com/blog/rediscover-redis-for-vector-similarity-search/) generated by machine learning models. ++You can use **RediSearch** in a wide variety of additional use-cases, including real-time inventory, enterprise search, and indexing external databases. [For more information, see the RediSearch documentation page](https://redis.io/docs/stack/search/). >[!IMPORTANT] > The RediSearch module can only be used with the `Enterprise` clustering policy. For more information, see [Clustering Policy](quickstart-create-redis-enterprise.md#clustering-policy). ->[!NOTE] -> The RediSearch module is the only module that can be used with active geo-replication. - ### RedisBloom RedisBloom adds four probabilistic data structures to a Redis server: **bloom filter**, **cuckoo filter**, **count-min sketch**, and **top-k**. Each of these data structures offers a way to sacrifice perfect accuracy in return for higher speed and better memory efficiency. |
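For a concrete sense of how these module commands are used from .NET, here's a minimal sketch with StackExchange.Redis, assuming an Enterprise-tier cache with RedisBloom enabled; the host name and key names are placeholders, and the `BF.*` commands are issued through the client's generic `Execute` method:

```csharp
using System;
using StackExchange.Redis;

class BloomFilterSketch
{
    static void Main()
    {
        // Placeholder endpoint for an Enterprise-tier cache with RedisBloom enabled.
        var muxer = ConnectionMultiplexer.Connect(
            "contoso.eastus.redisenterprise.cache.azure.net:10000,password=<access-key>,ssl=true");
        IDatabase db = muxer.GetDatabase();

        // Module commands aren't wrapped by the client, so issue them with Execute.
        db.Execute("BF.RESERVE", "skus", 0.01, 100000); // ~1% false-positive rate, 100k capacity
        db.Execute("BF.ADD", "skus", "sku-12345");

        // Bloom filters trade accuracy for memory: false positives are possible,
        // false negatives are not.
        bool maybePresent = (bool)db.Execute("BF.EXISTS", "skus", "sku-12345");
        Console.WriteLine(maybePresent); // True
    }
}
```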
azure-functions | Functions Bindings Azure Sql Output | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-bindings-azure-sql-output.md | namespace AzureSQL.ToDo [HttpTrigger(AuthorizationLevel.Anonymous, "post", Route = "PostFunction")] HttpRequestData req, FunctionContext executionContext) {- var logger = executionContext.GetLogger("HttpExample"); + var logger = executionContext.GetLogger("PostToDo"); logger.LogInformation("C# HTTP trigger function processed a request."); string requestBody = await new StreamReader(req.Body).ReadToEndAsync(); using Microsoft.AspNetCore.Mvc; using Microsoft.Extensions.Primitives; using Newtonsoft.Json; -public static IActionResult Run(HttpRequest req, ILogger log, out ToDoItem todoItem, out RequestLog requestLog) +public static IActionResult Run(HttpRequest req, ILogger log, out ToDoItem todoItem) { log.LogInformation("C# HTTP trigger function processed a request."); string requestBody = new StreamReader(req.Body).ReadToEnd(); todoItem = JsonConvert.DeserializeObject<ToDoItem>(requestBody); - requestLog = new RequestLog(); - requestLog.RequestTimeStamp = DateTime.Now; - requestLog.ItemCount = 1; - return new OkObjectResult(todoItem); }--public class RequestLog { - public DateTime RequestTimeStamp { get; set; } - public int ItemCount { get; set; } -} ``` <a id="http-trigger-write-to-two-tables-csharpscript"></a> using Microsoft.AspNetCore.Mvc; using Microsoft.Extensions.Primitives; using Newtonsoft.Json; -public static IActionResult Run(HttpRequest req, ILogger log, out ToDoItem todoItem) +public static IActionResult Run(HttpRequest req, ILogger log, out ToDoItem todoItem, out RequestLog requestLog) { log.LogInformation("C# HTTP trigger function processed a request."); string requestBody = new StreamReader(req.Body).ReadToEnd(); todoItem = JsonConvert.DeserializeObject<ToDoItem>(requestBody); + requestLog = new RequestLog(); + requestLog.RequestTimeStamp = DateTime.Now; + requestLog.ItemCount = 1; + return new OkObjectResult(todoItem); }++public class RequestLog { + public DateTime RequestTimeStamp { get; set; } + public int ItemCount { get; set; } +} ``` |
azure-functions | Functions Bindings Azure Sql Trigger | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-bindings-azure-sql-trigger.md | Title: Azure SQL trigger for Functions description: Learn to use the Azure SQL trigger in Azure Functions. Previously updated : 11/10/2022 Last updated : 4/14/2023 zone_pivot_groups: programming-languages-set-functions-lang-workers zone_pivot_groups: programming-languages-set-functions-lang-workers # Azure SQL trigger for Functions (preview) - > [!NOTE]-> The Azure SQL trigger is only supported on **Premium and Dedicated** plans. Consumption is not currently supported. +> The Azure SQL trigger for Functions is currently in preview and requires that a preview extension library or extension bundle is used. -The Azure SQL trigger uses [SQL change tracking](/sql/relational-databases/track-changes/about-change-tracking-sql-server) functionality to monitor a SQL table for changes and trigger a function when a row is created, updated, or deleted. +The Azure SQL trigger uses [SQL change tracking](/sql/relational-databases/track-changes/about-change-tracking-sql-server) functionality to monitor a SQL table for changes and trigger a function when a row is created, updated, or deleted. For configuration details for change tracking for use with the Azure SQL trigger, see [Set up change tracking](#set-up-change-tracking-required). For information on setup details of the Azure SQL extension for Azure Functions, see the [SQL binding overview](./functions-bindings-azure-sql.md). -For configuration details for change tracking for use with the Azure SQL trigger, see [Set up change tracking](#set-up-change-tracking-required). For information on setup details of the Azure SQL extension for Azure Functions, see the [SQL binding overview](./functions-bindings-azure-sql.md). +The Azure SQL trigger scaling decisions for the Consumption and Premium plans are done via target-based scaling. For more information, see [Target-based scaling](functions-target-based-scaling.md). ## Functionality Overview -The Azure SQL Trigger binding uses a polling loop to check for changes, triggering the user function when changes are detected. At a high level, the loop looks like this: +The Azure SQL trigger binding uses a polling loop to check for changes, triggering the user function when changes are detected. At a high level, the loop looks like this: ``` while (true) { Changes are processed in the order that their changes were made, with the oldest For more information on change tracking and how it's used by applications such as Azure SQL triggers, see [work with change tracking](/sql/relational-databases/track-changes/work-with-change-tracking-sql-server) . ++ ## Example usage+<a id="example"></a> -More samples for the Azure SQL trigger are available in the [GitHub repository](https://github.com/Azure/azure-functions-sql-extension/tree/main/samples/samples-csharp). ++# [In-process](#tab/in-process) ++More samples for the Azure SQL trigger are available in the [GitHub repository](https://github.com/Azure/azure-functions-sql-extension/tree/release/trigger/samples/samples-csharp). The example refers to a `ToDoItem` class and a corresponding database table: The SQL trigger binds to a `IReadOnlyList<SqlChange<T>>`, a list of `SqlChange` - **Item:** the item that was changed. The type of the item should follow the table schema as seen in the `ToDoItem` class. - **Operation:** a value from `SqlChangeOperation` enum. 
The possible values are `Insert`, `Update`, and `Delete`. -# [In-process](#tab/in-process) - The following example shows a [C# function](functions-dotnet-class-library.md) that is invoked when there are changes to the `ToDo` table: ```cs namespace AzureSQL.ToDo # [Isolated process](#tab/isolated-process) -Isolated worker process isn't currently supported. +More samples for the Azure SQL trigger are available in the [GitHub repository](https://github.com/Azure/azure-functions-sql-extension/tree/release/trigger/samples/samples-outofproc). +++The example refers to a `ToDoItem` class and a corresponding database table: ++++[Change tracking](#set-up-change-tracking-required) is enabled on the database and on the table: ++```sql +ALTER DATABASE [SampleDatabase] +SET CHANGE_TRACKING = ON +(CHANGE_RETENTION = 2 DAYS, AUTO_CLEANUP = ON); ++ALTER TABLE [dbo].[ToDo] +ENABLE CHANGE_TRACKING; +``` ++The SQL trigger binds to a `IReadOnlyList<SqlChange<T>>`, a list of `SqlChange` objects each with two properties: +- **Item:** the item that was changed. The type of the item should follow the table schema as seen in the `ToDoItem` class. +- **Operation:** a value from `SqlChangeOperation` enum. The possible values are `Insert`, `Update`, and `Delete`. ++The following example shows a [C# function](functions-dotnet-class-library.md) that is invoked when there are changes to the `ToDo` table: ++```cs +using System; +using System.Collections.Generic; +using Microsoft.Azure.Functions.Worker; +using Microsoft.Azure.Functions.Worker.Extensions.Sql; +using Microsoft.Extensions.Logging; +using Newtonsoft.Json; +++namespace AzureSQL.ToDo +{ + public static class ToDoTrigger + { + [Function("ToDoTrigger")] + public static void Run( + [SqlTrigger("[dbo].[ToDo]", "SqlConnectionString")] + IReadOnlyList<SqlChange<ToDoItem>> changes, + FunctionContext context) + { + var logger = context.GetLogger("ToDoTrigger"); + foreach (SqlChange<ToDoItem> change in changes) + { + ToDoItem toDoItem = change.Item; + logger.LogInformation($"Change operation: {change.Operation}"); + logger.LogInformation($"Id: {toDoItem.Id}, Title: {toDoItem.title}, Url: {toDoItem.url}, Completed: {toDoItem.completed}"); + } + } + } +} +``` + -<!-- Uncomment to support C# script examples. # [C# Script](#tab/csharp-script) >+More samples for the Azure SQL trigger are available in the [GitHub repository](https://github.com/Azure/azure-functions-sql-extension/tree/release/trigger/samples/samples-csharpscript). +++The example refers to a `ToDoItem` class and a corresponding database table: ++++[Change tracking](#set-up-change-tracking-required) is enabled on the database and on the table: ++```sql +ALTER DATABASE [SampleDatabase] +SET CHANGE_TRACKING = ON +(CHANGE_RETENTION = 2 DAYS, AUTO_CLEANUP = ON); ++ALTER TABLE [dbo].[ToDo] +ENABLE CHANGE_TRACKING; +``` ++The SQL trigger binds to a `IReadOnlyList<SqlChange<T>>`, a list of `SqlChange` objects each with two properties: +- **Item:** the item that was changed. The type of the item should follow the table schema as seen in the `ToDoItem` class. +- **Operation:** a value from `SqlChangeOperation` enum. The possible values are `Insert`, `Update`, and `Delete`. 
++The following example shows a SQL trigger in a function.json file and a [C# script function](functions-reference-csharp.md) that is invoked when there are changes to the `ToDo` table: ++The following is binding data in the function.json file: ++```json +{ + "name": "todoChanges", + "type": "sqlTrigger", + "direction": "in", + "tableName": "dbo.ToDo", + "connectionStringSetting": "SqlConnectionString" +} +``` +The following is the C# script function: ++```csharp +#r "Newtonsoft.Json" ++using System.Net; +using Microsoft.AspNetCore.Mvc; +using Microsoft.Extensions.Primitives; +using Newtonsoft.Json; ++public static void Run(IReadOnlyList<SqlChange<ToDoItem>> todoChanges, ILogger log) +{ + log.LogInformation($"C# SQL trigger function processed a request."); ++ foreach (SqlChange<ToDoItem> change in todoChanges) + { + ToDoItem toDoItem = change.Item; + log.LogInformation($"Change operation: {change.Operation}"); + log.LogInformation($"Id: {toDoItem.Id}, Title: {toDoItem.title}, Url: {toDoItem.url}, Completed: {toDoItem.completed}"); + } +} +``` + +++## Example usage +<a id="example"></a> ++More samples for the Azure SQL trigger are available in the [GitHub repository](https://github.com/Azure/azure-functions-sql-extension/tree/release/trigger/samples/samples-java). +++The example refers to a `ToDoItem` class, a `SqlChangeToDoItem` class, a `SqlChangeOperation` enum, and a corresponding database table: ++In a separate file `ToDoItem.java`: ++```java +package com.function; +import java.util.UUID; ++public class ToDoItem { + public UUID Id; + public int order; + public String title; + public String url; + public boolean completed; ++ public ToDoItem() { + } ++ public ToDoItem(UUID Id, int order, String title, String url, boolean completed) { + this.Id = Id; + this.order = order; + this.title = title; + this.url = url; + this.completed = completed; + } +} +``` ++In a separate file `SqlChangeToDoItem.java`: +```java +package com.function; ++public class SqlChangeToDoItem { + public ToDoItem item; + public SqlChangeOperation operation; ++ public SqlChangeToDoItem() { + } ++ public SqlChangeToDoItem(ToDoItem item, SqlChangeOperation operation) { + this.item = item; + this.operation = operation; + } +} +``` ++In a separate file `SqlChangeOperation.java`: +```java +package com.function; ++import com.google.gson.annotations.SerializedName; ++public enum SqlChangeOperation { + @SerializedName("0") + Insert, + @SerializedName("1") + Update, + @SerializedName("2") + Delete; +} +``` +++[Change tracking](#set-up-change-tracking-required) is enabled on the database and on the table: ++```sql +ALTER DATABASE [SampleDatabase] +SET CHANGE_TRACKING = ON +(CHANGE_RETENTION = 2 DAYS, AUTO_CLEANUP = ON); ++ALTER TABLE [dbo].[ToDo] +ENABLE CHANGE_TRACKING; +``` ++The SQL trigger binds to a `SqlChangeToDoItem[]`, an array of `SqlChangeToDoItem` objects each with two properties: +- **item:** the item that was changed. The type of the item should follow the table schema as seen in the `ToDoItem` class. +- **operation:** a value from `SqlChangeOperation` enum. The possible values are `Insert`, `Update`, and `Delete`. 
+++The following example shows a Java function that is invoked when there are changes to the `ToDo` table: ++```java +package com.function; ++import com.microsoft.azure.functions.ExecutionContext; +import com.microsoft.azure.functions.annotation.FunctionName; +import com.microsoft.azure.functions.sql.annotation.SQLTrigger; +import com.function.SqlChangeToDoItem; +import com.google.gson.Gson; ++import java.util.logging.Level; ++public class ToDoTrigger { + @FunctionName("ToDoTrigger") + public void run( + @SQLTrigger( + name = "todoItems", + tableName = "[dbo].[ToDo]", + connectionStringSetting = "SqlConnectionString") + SqlChangeToDoItem[] todoItems, + ExecutionContext context) { ++ context.getLogger().log(Level.INFO, "SQL Changes: " + new Gson().toJson(todoItems)); + } +} +``` +++++## Example usage +<a id="example"></a> ++More samples for the Azure SQL trigger are available in the [GitHub repository](https://github.com/Azure/azure-functions-sql-extension/tree/release/trigger/samples/samples-powershell). +++The example refers to a `ToDoItem` database table: +++[Change tracking](#set-up-change-tracking-required) is enabled on the database and on the table: ++```sql +ALTER DATABASE [SampleDatabase] +SET CHANGE_TRACKING = ON +(CHANGE_RETENTION = 2 DAYS, AUTO_CLEANUP = ON); ++ALTER TABLE [dbo].[ToDo] +ENABLE CHANGE_TRACKING; +``` ++The SQL trigger binds to `todoChanges`, a list of objects each with two properties: +- **item:** the item that was changed. The structure of the item will follow the table schema. +- **operation:** the possible values are `Insert`, `Update`, and `Delete`. +++The following example shows a PowerShell function that is invoked when there are changes to the `ToDo` table. ++The following is binding data in the function.json file: ++```json +{ + "name": "todoChanges", + "type": "sqlTrigger", + "direction": "in", + "tableName": "dbo.ToDo", + "connectionStringSetting": "SqlConnectionString" +} +``` +++The [configuration](#configuration) section explains these properties. ++The following is sample PowerShell code for the function in the `run.ps1` file: ++```powershell +using namespace System.Net ++param($todoChanges) +# The output is used to inspect the trigger binding parameter in test methods. +# Use -Compress to remove new lines and spaces for testing purposes. +$changesJson = $todoChanges | ConvertTo-Json -Compress +Write-Host "SQL Changes: $changesJson" +``` +++++++## Example usage +<a id="example"></a> ++More samples for the Azure SQL trigger are available in the [GitHub repository](https://github.com/Azure/azure-functions-sql-extension/tree/release/trigger/samples/samples-js). +++The example refers to a `ToDoItem` database table: +++[Change tracking](#set-up-change-tracking-required) is enabled on the database and on the table: ++```sql +ALTER DATABASE [SampleDatabase] +SET CHANGE_TRACKING = ON +(CHANGE_RETENTION = 2 DAYS, AUTO_CLEANUP = ON); ++ALTER TABLE [dbo].[ToDo] +ENABLE CHANGE_TRACKING; +``` ++The SQL trigger binds to `todoChanges`, an array of objects each with two properties: +- **item:** the item that was changed. The structure of the item will follow the table schema. +- **operation:** the possible values are `Insert`, `Update`, and `Delete`. +++The following example shows a JavaScript function that is invoked when there are changes to the `ToDo` table. 
++The following is binding data in the function.json file: ++```json +{ + "name": "todoChanges", + "type": "sqlTrigger", + "direction": "in", + "tableName": "dbo.ToDo", + "connectionStringSetting": "SqlConnectionString" +} +``` +++The [configuration](#configuration) section explains these properties. ++The following is sample JavaScript code for the function in the `index.js` file: ++```javascript +module.exports = async function (context, todoChanges) { + context.log(`SQL Changes: ${JSON.stringify(todoChanges)}`) +} +``` ++++++## Example usage +<a id="example"></a> ++More samples for the Azure SQL trigger are available in the [GitHub repository](https://github.com/Azure/azure-functions-sql-extension/tree/release/trigger/samples/samples-python). +++The example refers to a `ToDoItem` database table: ++++[Change tracking](#set-up-change-tracking-required) is enabled on the database and on the table: ++```sql +ALTER DATABASE [SampleDatabase] +SET CHANGE_TRACKING = ON +(CHANGE_RETENTION = 2 DAYS, AUTO_CLEANUP = ON); ++ALTER TABLE [dbo].[ToDo] +ENABLE CHANGE_TRACKING; +``` ++The SQL trigger binds to a variable `todoChanges`, a list of objects each with two properties: +- **item:** the item that was changed. The structure of the item will follow the table schema. +- **operation:** the possible values are `Insert`, `Update`, and `Delete`. +++The following example shows a Python function that is invoked when there are changes to the `ToDo` table. ++The following is binding data in the function.json file: ++```json +{ + "name": "todoChanges", + "type": "sqlTrigger", + "direction": "in", + "tableName": "dbo.ToDo", + "connectionStringSetting": "SqlConnectionString" +} +``` +++The [configuration](#configuration) section explains these properties. ++The following is sample Python code for the function in the `__init__.py` file: ++```python +import json +import logging ++def main(todoChanges): + logging.info("SQL Changes: %s", json.loads(todoChanges)) +``` ++++++ ## Attributes The [C# library](functions-dotnet-class-library.md) uses the [SqlTrigger](https://github.com/Azure/azure-functions-sql-extension/blob/main/src/TriggerBinding/SqlTriggerAttribute.cs) attribute to declare the SQL trigger on the function, which has the following properties: The [C# library](functions-dotnet-class-library.md) uses the [SqlTrigger](https: | **TableName** | Required. The name of the table monitored by the trigger. | | **ConnectionStringSetting** | Required. 
The name of an app setting that contains the connection string for the database containing the table monitored for changes. The connection string setting name corresponds to the application setting (in `local.settings.json` for local development) that contains the [connection string](/dotnet/api/microsoft.data.sqlclient.sqlconnection.connectionstring?view=sqlclient-dotnet-core-5.&preserve-view=true#Microsoft_Data_SqlClient_SqlConnection_ConnectionString) to the Azure SQL or SQL Server instance.| ++++## Configuration The following table explains the binding configuration properties that you set in the function.json file. |function.json property | Description|+||-| +| **name** | Required. The name of the parameter that the trigger binds to. | +| **type** | Required. Must be set to `sqlTrigger`. | +| **direction** | Required. Must be set to `in`. | +| **tableName** | Required. The name of the table monitored by the trigger. | +| **connectionStringSetting** | Required. The name of an app setting that contains the connection string for the database containing the table monitored for changes. The connection string setting name corresponds to the application setting (in `local.settings.json` for local development) that contains the [connection string](/dotnet/api/microsoft.data.sqlclient.sqlconnection.connectionstring?view=sqlclient-dotnet-core-5.&preserve-view=true#Microsoft_Data_SqlClient_SqlConnection_ConnectionString) to the Azure SQL or SQL Server instance.| >+## Optional Configuration In addition to the required ConnectionStringSetting [application setting](./functions-how-to-use-azure-function-app-settings.md#settings), the following optional settings can be configured for the SQL trigger: If the function execution fails five times in a row for a given row then that ro - [Read data from a database (Input binding)](./functions-bindings-azure-sql-input.md) - [Save data to a database (Output binding)](./functions-bindings-azure-sql-output.md) --> [!NOTE] -> In the current preview, Azure SQL triggers are only supported by [C# class library functions](functions-dotnet-class-library.md) - |
azure-functions | Functions Bindings Azure Sql | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-bindings-azure-sql.md | description: Understand how to use Azure SQL bindings in Azure Functions. Previously updated : 4/7/2023 Last updated : 4/14/2023 zone_pivot_groups: programming-languages-set-functions-lang-workers Add the Java library for SQL bindings to your functions project with an update t <dependency> <groupId>com.microsoft.azure.functions</groupId> <artifactId>azure-functions-java-library-sql</artifactId>- <version>0.1.1</version> + <version>2.0.0-preview</version> </dependency> ``` |
azure-functions | Functions Bindings Warmup | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-bindings-warmup.md | The following considerations apply to using a warmup function in C#: # [Isolated process](#tab/isolated-process) -- Your function must be named `warmup` (case-insensitive) using the `FunctionName` attribute.+- Your function must be named `warmup` (case-insensitive) using the `Function` attribute. - A return value attribute isn't required. - You can pass an object instance to the function. |
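For comparison, a minimal isolated-process warmup sketch, assuming the `Microsoft.Azure.Functions.Worker.Extensions.Warmup` package is referenced; the class and logger names here are illustrative:

```csharp
using Microsoft.Azure.Functions.Worker;
using Microsoft.Extensions.Logging;

public class Warmup
{
    // In the isolated worker model the function is declared with the Function
    // attribute (not FunctionName), and the name must be "warmup" (case-insensitive).
    [Function("warmup")]
    public void Run([WarmupTrigger] object warmupContext, FunctionContext context)
    {
        ILogger logger = context.GetLogger("warmup");
        // Pre-load caches, open connections, or exercise critical code paths here.
        logger.LogInformation("Function app instance is warm.");
    }
}
```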
azure-functions | Functions Overview | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-overview.md | The following are a common, _but by no means exhaustive_, set of scenarios for A | **Create reliable message queue systems** | Process message queues using [Queue Storage](./functions-bindings-storage-queue.md), [Service Bus](./functions-bindings-service-bus.md), or [Event Hubs](./functions-bindings-event-hubs.md) | | **Analyze IoT data streams** | Collect and process [data from IoT devices](./functions-bindings-event-iot.md) | | **Process data in real time** | Use [Functions and SignalR](./functions-bindings-signalr-service.md) to respond to data in the moment |+| **Connect to a SQL database** | Use [SQL bindings](./functions-bindings-azure-sql.md) to read or write data from Azure SQL | These scenarios allow you to build event-driven systems using modern architectural patterns. |
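To illustrate that last scenario, here's a minimal sketch of an HTTP-triggered in-process function that reads rows through the SQL input binding; it assumes the preview `Microsoft.Azure.WebJobs.Extensions.Sql` package and reuses the `ToDoItem` class from the trigger examples, with the function name and route being placeholders:

```csharp
using System.Collections.Generic;
using Microsoft.AspNetCore.Http;
using Microsoft.AspNetCore.Mvc;
using Microsoft.Azure.WebJobs;
using Microsoft.Azure.WebJobs.Extensions.Http;
using Microsoft.Azure.WebJobs.Extensions.Sql;

public static class GetToDoItems
{
    [FunctionName("GetToDoItems")]
    public static IActionResult Run(
        [HttpTrigger(AuthorizationLevel.Anonymous, "get", Route = "todo")] HttpRequest req,
        // The binding runs the query against the database named by the
        // "SqlConnectionString" app setting and materializes each row as a ToDoItem.
        [Sql("SELECT * FROM dbo.ToDo",
            CommandType = System.Data.CommandType.Text,
            ConnectionStringSetting = "SqlConnectionString")]
        IEnumerable<ToDoItem> toDoItems)
    {
        return new OkObjectResult(toDoItems);
    }
}
```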
azure-functions | Functions Reference | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-reference.md | The following components support identity-based connections: | Azure Blobs triggers and bindings | All | [Azure Blobs extension version 5.0.0 or later][blobv5],<br/>[Extension bundle 3.3.0 or later][blobv5] | | Azure Queues triggers and bindings | All | [Azure Queues extension version 5.0.0 or later][queuev5],<br/>[Extension bundle 3.3.0 or later][queuev5] | | Azure Tables (when using Azure Storage) | All | [Azure Tables extension version 1.0.0 or later](./functions-bindings-storage-table.md#table-api-extension),<br/>[Extension bundle 3.3.0 or later][tablesv1] |+| Azure SQL Database | All | [Connect a function app to Azure SQL with managed identity and SQL bindings][azuresql-identity] | Azure Event Hubs triggers and bindings | All | [Azure Event Hubs extension version 5.0.0 or later][eventhubv5],<br/>[Extension bundle 3.3.0 or later][eventhubv5] | | Azure Service Bus triggers and bindings | All | [Azure Service Bus extension version 5.0.0 or later][servicebusv5],<br/>[Extension bundle 3.3.0 or later][servicebusv5] | | Azure Cosmos DB triggers and bindings | All | [Azure Cosmos DB extension version 4.0.0 or later][cosmosv4],<br/> [Extension bundle 4.0.2 or later][cosmosv4]| The following components support identity-based connections: [tablesv1]: ./functions-bindings-storage-table.md#table-api-extension [signalr]: ./functions-bindings-signalr-service.md#install-extension [durable-identity]: ./durable/durable-functions-configure-durable-functions-with-credentials.md+[azuresql-identity]: ./functions-identity-access-azure-sql-with-managed-identity.md [!INCLUDE [functions-identity-based-connections-configuration](../../includes/functions-identity-based-connections-configuration.md)] |
azure-maps | How To Manage Account Keys | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/how-to-manage-account-keys.md | You can manage your Azure Maps account through the Azure portal. After you have ## Prerequisites -- If you don't already have an Azure account, [sign up for a free account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F) before you continue.-- For picking account location and you're unfamiliar with managed identities for Azure resources, check out the [overview section](../active-directory/managed-identities-azure-resources/overview.md).+- If you don't already have an Azure account, [sign up for a free account] before you continue. +- For picking account location, if you're unfamiliar with managed identities for Azure resources, see [managed identities for Azure resources]. ## Account location -Picking a location for your Azure Maps account that aligns with other resources in your subscription, like managed identities, may help to improve the level of service for [control-plane](../azure-resource-manager/management/control-plane-and-data-plane.md) operations. +Picking a location for your Azure Maps account that aligns with other resources in your subscription, like managed identities, may help to improve the level of service for [control-plane] operations. -As an example, the managed identity infrastructure will communicate and notify the Azure Maps management services for changes to the identity resource such as credential renewal or deletion. Sharing the same Azure location enables a consistent infrastructure provisioning for all resources. +As an example, the managed identity infrastructure notifies the Azure Maps management services for changes to the identity resource such as credential renewal or deletion. Sharing the same Azure location enables a consistent infrastructure provisioning for all resources. -Any Azure Maps REST API on endpoint `atlas.microsoft.com`, `*.atlas.microsoft.com`, or other endpoints belonging to the Azure data-plane are not affected by the choice of the Azure Maps account location. +An Azure Maps account, regardless of location, can access any endpoint belonging to the Azure data-plane, such as `atlas.microsoft.com` and `*.atlas.microsoft.com`, when using Azure Maps REST API. -Read more about data-plane service coverage for Azure Maps services on [geographic coverage](./geographic-coverage.md). +Read more about data-plane service coverage for Azure Maps services on [geographic coverage]. ## Create a new account -1. Sign in to the [Azure portal](https://portal.azure.com). +1. Sign in to the [Azure portal]. 2. Select **Create a resource** in the upper-left corner of the Azure portal. You then see a confirmation page. 
You can confirm the deletion of your account b Set up authentication with Azure Maps and learn how to get an Azure Maps subscription key: > [!div class="nextstepaction"]-> [Manage authentication](how-to-manage-authentication.md) +> [Manage authentication] Learn how to manage an Azure Maps account pricing tier: > [!div class="nextstepaction"]-> [Manage a pricing tier](how-to-manage-pricing-tier.md) +> [Manage a pricing tier] Learn how to see the API usage metrics for your Azure Maps account: > [!div class="nextstepaction"]-> [View usage metrics](how-to-view-api-usage.md) +> [View usage metrics] ++[Azure portal]: https://portal.azure.com +[control-plane]: ../azure-resource-manager/management/control-plane-and-data-plane.md +[geographic coverage]: geographic-coverage.md +[Manage a pricing tier]: how-to-manage-pricing-tier.md +[Manage authentication]: how-to-manage-authentication.md +[managed identities for Azure resources]: ../active-directory/managed-identities-azure-resources/overview.md +[sign up for a free account]: https://azure.microsoft.com/free/?WT.mc_id=A261C142F +[View usage metrics]: how-to-view-api-usage.md |
azure-maps | How To Manage Authentication | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/how-to-manage-authentication.md | custom.ms: subject-rbac-steps # Manage authentication in Azure Maps -When you create an Azure Maps account, your client ID is automatically generated along with primary and secondary keys that are required for authentication when using [Azure Active Directory (Azure AD)](../active-directory/fundamentals/active-directory-whatis.md) or [Shared Key authentication](./azure-maps-authentication.md#shared-key-authentication). +When you create an Azure Maps account, your client ID and shared keys are created automatically. These values are required for authentication when using either [Azure Active Directory (Azure AD)] or [Shared Key authentication]. ## Prerequisites -Sign in to the [Azure portal](https://portal.azure.com). If you don't have an Azure subscription, create a [free account](https://azure.microsoft.com/free/) before you begin. -- A familiarization with [managed identities for Azure resources](../active-directory/managed-identities-azure-resources/overview.md). Be sure to understand the two [Managed identity types](../active-directory/managed-identities-azure-resources/overview.md#managed-identity-types) and how they differ.-- [An Azure Maps account](quick-demo-map-app.md#create-an-azure-maps-account).-- A familiarization with [Azure Maps Authentication](./azure-maps-authentication.md).+Sign in to the [Azure portal]. If you don't have an Azure subscription, create a [free account] before you begin. ++- Familiarity with [managed identities for Azure resources]. Be sure to understand the two [Managed identity types] and how they differ. +- [An Azure Maps account]. +- Familiarity with [Azure Maps Authentication]. ## View authentication details Sign in to the [Azure portal](https://portal.azure.com). If you don't have an Az To view your Azure Maps authentication details: -1. Sign in to the [Azure portal](https://portal.azure.com). +1. Sign in to the [Azure portal]. 2. Select **All resources** in the **Azure services** section, then select your Azure Maps account. To view your Azure Maps authentication details: ## Choose an authentication category -Depending on your application needs, there are specific pathways to application security. Azure AD defines specific authentication categories to support a wide range of authentication flows. To choose the best category for your application, see [application categories](../active-directory/develop/authentication-flows-app-scenarios.md#application-categories). +Depending on your application needs, there are specific pathways to application security. Azure AD defines specific authentication categories to support a wide range of authentication flows. To choose the best category for your application, see [application categories]. > [!NOTE] > Understanding categories and scenarios will help you secure your Azure Maps application, whether you use Azure Active Directory or shared key authentication. ## How to add and remove managed identities -To enable [Shared access signature (SAS) token authentication](./azure-maps-authentication.md#shared-access-signature-token-authentication) with the Azure Maps REST API you need to add a user-assigned managed identity to your Azure Maps account. +To enable [Shared access signature (SAS) token authentication] with the Azure Maps REST API, you need to add a user-assigned managed identity to your Azure Maps account. 
### Create a managed identity -You can create a user-assigned managed identity before or after creating a map account. You can add the managed identity through the portal, Azure management SDKs, or the Azure Resource Manager (ARM) template. To add a user-assigned managed identity through an ARM template, specify the resource identifier of the user-assigned managed identity. See example below: +You can create a user-assigned managed identity before or after creating a map account. You can add the managed identity through the portal, Azure management SDKs, or the Azure Resource Manager (ARM) template. To add a user-assigned managed identity through an ARM template, specify the resource identifier of the user-assigned managed identity. ```json "identity": { You can create a user-assigned managed identity before or after creating a map a You can remove a system-assigned identity by disabling the feature through the portal or the Azure Resource Manager template in the same way that it was created. User-assigned identities can be removed individually. To remove all identities, set the identity type to `"None"`. -Removing a system-assigned identity in this way will also delete it from Azure AD. System-assigned identities are also automatically removed from Azure AD when the Azure Maps account is deleted. +Removing a system-assigned identity in this way also deletes it from Azure AD. System-assigned identities are also automatically removed from Azure AD when the Azure Maps account is deleted. To remove all identities by using the Azure Resource Manager template, update this section: To remove all identities by using the Azure Resource Manager template, update th ## Choose an authentication and authorization scenario -This table outlines common authentication and authorization scenarios in Azure Maps. Each scenario describes a type of app which can be used to access Azure Maps REST API. Use the links to learn detailed configuration information for each scenario. +This table outlines common authentication and authorization scenarios in Azure Maps. Each scenario describes a type of app that can be used to access Azure Maps REST API. Use the links to learn detailed configuration information for each scenario. > [!IMPORTANT] > For production applications, we recommend implementing Azure AD with Azure role-based access control (Azure RBAC). 
-| Scenario | Authentication | Authorization | Development effort | Operational effort | -| -- | -- | - | | | -| [Trusted daemon app or non-interactive client app](./how-to-secure-daemon-app.md) | Shared Key | N/A | Medium | High | -| [Trusted daemon or non-interactive client app](./how-to-secure-daemon-app.md) | Azure AD | High | Low | Medium | -| [Web single page app with interactive single-sign-on](./how-to-secure-spa-users.md) | Azure AD | High | Medium | Medium | -| [Web single page app with non-interactive sign-on](./how-to-secure-spa-app.md) | Azure AD | High | Medium | Medium | -| [Web app, daemon app, or non-interactive sign-on app](./how-to-secure-sas-app.md) | SAS Token | High | Medium | Low | -| [Web application with interactive single-sign-on](./how-to-secure-webapp-users.md) | Azure AD | High | High | Medium | -| [IoT device or an input constrained application](./how-to-secure-device-code.md) | Azure AD | High | Medium | Medium | +| Scenario | Authentication | Authorization | Development effort | Operational effort | +| --| -- | - | | | +| [Trusted daemon app or non-interactive client app] | Shared Key | N/A | Medium | High | +| [Trusted daemon or non-interactive client app] | Azure AD | High | Low | Medium | +| [Web single page app with interactive single-sign-on]| Azure AD | High | Medium | Medium | +| [Web single page app with non-interactive sign-on] | Azure AD | High | Medium | Medium | +| [Web app, daemon app, or non-interactive sign-on app]| SAS Token | High | Medium | Low | +| [Web application with interactive single-sign-on] | Azure AD | High | High | Medium | +| [IoT device or an input constrained application] | Azure AD | High | Medium | Medium | ## View built-in Azure Maps role definitions Request a token from the Azure AD token endpoint. In your Azure AD request, use | Azure public cloud | `https://login.microsoftonline.com` | `https://atlas.microsoft.com/` | | Azure Government cloud | `https://login.microsoftonline.us` | `https://atlas.microsoft.com/` | -For more information about requesting access tokens from Azure AD for users and service principals, see [Authentication scenarios for Azure AD](../active-directory/develop/authentication-vs-authorization.md). To view specific scenarios, see [the table of scenarios](./how-to-manage-authentication.md#choose-an-authentication-and-authorization-scenario). +For more information about requesting access tokens from Azure AD for users and service principals, see [Authentication scenarios for Azure AD]. To view specific scenarios, see [the table of scenarios]. ## Manage and rotate shared keys Your Azure Maps subscription keys are similar to a root password for your Azure ### Manually rotate subscription keys -To help keep your Azure Maps account secure, we recommend periodically rotating your subscription keys. If possible, use Azure Key Vault to manage your access keys. If you aren't using Key Vault, you'll need to manually rotate your keys. +To help keep your Azure Maps account secure, we recommend periodically rotating your subscription keys. If possible, use Azure Key Vault to manage your access keys. If you aren't using Key Vault, you need to manually rotate your keys. Two subscription keys are assigned so that you can rotate your keys. Having two keys ensures that your application maintains access to Azure Maps throughout the process. To rotate your Azure Maps subscription keys in the Azure portal: 1. Update your application code to reference the secondary key for the Azure Maps account and deploy.-2. 
In the [Azure portal](https://portal.azure.com/), navigate to your Azure Maps account. +2. In the [Azure portal], navigate to your Azure Maps account. 3. Under **Settings**, select **Authentication**. 4. To regenerate the primary key for your Azure Maps account, select the **Regenerate** button next to the primary key. 5. Update your application code to reference the new primary key and deploy. To rotate your Azure Maps subscription keys in the Azure portal: Find the API usage metrics for your Azure Maps account: > [!div class="nextstepaction"]-> [View usage metrics](how-to-view-api-usage.md) +> [View usage metrics] Explore samples that show how to integrate Azure AD with Azure Maps: > [!div class="nextstepaction"]-> [Azure AD authentication samples](https://github.com/Azure-Samples/Azure-Maps-AzureAD-Samples) +> [Azure AD authentication samples] ++[Azure portal]: https://portal.azure.com/ +[Azure AD authentication samples]: https://github.com/Azure-Samples/Azure-Maps-AzureAD-Samples +[View usage metrics]: how-to-view-api-usage.md +[Authentication scenarios for Azure AD]: ../active-directory/develop/authentication-vs-authorization.md +[the table of scenarios]: how-to-manage-authentication.md#choose-an-authentication-and-authorization-scenario +[Trusted daemon app or non-interactive client app]: how-to-secure-daemon-app.md +[Trusted daemon or non-interactive client app]: how-to-secure-daemon-app.md +[Web single page app with interactive single-sign-on]: how-to-secure-spa-users.md +[Web single page app with non-interactive sign-on]: how-to-secure-spa-app.md +[Web app, daemon app, or non-interactive sign-on app]: how-to-secure-sas-app.md +[Web application with interactive single-sign-on]: how-to-secure-webapp-users.md +[IoT device or an input constrained application]: how-to-secure-device-code.md +[Shared access signature (SAS) token authentication]: azure-maps-authentication.md#shared-access-signature-token-authentication +[application categories]: ../active-directory/develop/authentication-flows-app-scenarios.md#application-categories +[Azure Active Directory (Azure AD)]: ../active-directory/fundamentals/active-directory-whatis.md +[Shared Key authentication]: azure-maps-authentication.md#shared-key-authentication +[free account]: https://azure.microsoft.com/free/ +[managed identities for Azure resources]: ../active-directory/managed-identities-azure-resources/overview.md +[Managed identity types]: ../active-directory/managed-identities-azure-resources/overview.md#managed-identity-types +[An Azure Maps account]: quick-demo-map-app.md#create-an-azure-maps-account +[Azure Maps Authentication]: azure-maps-authentication.md |
azure-maps | How To Manage Creator | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/how-to-manage-creator.md | Title: Manage Microsoft Azure Maps Creator -description: In this article, you'll learn how to manage Microsoft Azure Maps Creator. +description: This article demonstrates how to manage Microsoft Azure Maps Creator. Last updated 01/20/2022-You can use Azure Maps Creator to create private indoor map data. Using the Azure Maps API and the Indoor Maps module, you can develop interactive and dynamic indoor map web applications. For pricing information, see the *Creator* section in [Azure Maps pricing](https://aka.ms/CreatorPricing). +You can use Azure Maps Creator to create private indoor map data. Using the Azure Maps API and the Indoor Maps module, you can develop interactive and dynamic indoor map web applications. For pricing information, see the *Creator* section in [Azure Maps pricing]. This article takes you through the steps to create and delete a Creator resource in an Azure Maps account. ## Create Creator resource -1. Sign in to the [Azure portal](https://portal.azure.com) +1. Sign in to the [Azure portal]. 2. Navigate to the Azure portal menu. Select **All resources**, and then select your Azure Maps account. To delete the Creator resource: :::image type="content" source="./media/how-to-manage-creator/creator-delete.png" alt-text="A screenshot of the Azure Maps Creator Resource page with the delete button highlighted."::: -3. You'll be asked to confirm deletion by typing in the name of your Creator resource. After the resource is deleted, you see a confirmation page that looks like the following: +3. You're prompted to confirm deletion by typing in the name of your Creator resource. After the resource is deleted, you see a confirmation page that looks like the following example: :::image type="content" source="./media/how-to-manage-creator/creator-confirm-delete.png" alt-text="A screenshot of the Azure Maps Creator Resource deletion confirmation page."::: To delete the Creator resource: Creator inherits Azure Maps Access Control (IAM) settings. All API calls for data access must be sent with authentication and authorization rules. -Creator usage data is incorporated in your Azure Maps usage charts and activity log. For more information, see [Manage authentication in Azure Maps](./how-to-manage-authentication.md). +Creator usage data is incorporated in your Azure Maps usage charts and activity log. For more information, see [Manage authentication in Azure Maps]. >[!Important] >We recommend using: >-> * Azure Active Directory (Azure AD) in all solutions that are built with an Azure Maps account using Creator services. For more information, on Azure AD, see [Azure AD authentication](azure-maps-authentication.md#azure-ad-authentication). +> * Azure Active Directory (Azure AD) in all solutions that are built with an Azure Maps account using Creator services. For more information, on Azure AD, see [Azure AD authentication]. >->* Role-based access control settings (RBAC). Using these settings, map makers can act as the Azure Maps Data Contributor role, and Creator map data users can act as the Azure Maps Data Reader role. For more information, see [Authorization with role-based access control](azure-maps-authentication.md#authorization-with-role-based-access-control). +>* Role-based access control settings (RBAC). 
Using these settings, map makers can act as the Azure Maps Data Contributor role, and Creator map data users can act as the Azure Maps Data Reader role. For more information, see [Authorization with role-based access control]. ## Access to Creator services -Creator services and services that use data hosted in Creator (for example, Render service), are accessible at a geographical URL. The geographical URL is determined by the location selected during creation. For example, if Creator is created in a region in the United States geographical location, all calls to the Conversion service must be submitted to `us.atlas.microsoft.com/conversions`. To view mappings of region to geographical location, [see Creator service geographic scope](creator-geographic-scope.md). +Creator services and services that use data hosted in Creator (for example, Render service), are accessible at a geographical URL. The geographical URL determines the location selected during creation. For example, if Creator is created in a region in the United States geographical location, all calls to the Conversion service must be submitted to `us.atlas.microsoft.com/conversions`. To view mappings of region to geographical location, [see Creator service geographic scope]. Also, all data imported into Creator should be uploaded into the same geographical location as the Creator resource. For example, if Creator is provisioned in the United States, all raw data should be uploaded via `us.atlas.microsoft.com/mapData/upload`. Also, all data imported into Creator should be uploaded into the same geographic Introduction to Creator services for indoor mapping: > [!div class="nextstepaction"]-> [Data upload](creator-indoor-maps.md#upload-a-drawing-package) +> [Data upload] > [!div class="nextstepaction"]-> [Data conversion](creator-indoor-maps.md#convert-a-drawing-package) +> [Data conversion] > [!div class="nextstepaction"]-> [Dataset](creator-indoor-maps.md#datasets) +> [Dataset] > [!div class="nextstepaction"]-> [Tileset](creator-indoor-maps.md#tilesets) +> [Tileset] > [!div class="nextstepaction"]-> [Feature State set](creator-indoor-maps.md#feature-statesets) +> [Feature State set] Learn how to use the Creator services to render indoor maps in your application: > [!div class="nextstepaction"]-> [Azure Maps Creator tutorial](tutorial-creator-indoor-maps.md) +> [Azure Maps Creator tutorial] > [!div class="nextstepaction"]-> [Indoor map dynamic styling](indoor-map-dynamic-styling.md) +> [Indoor map dynamic styling] > [!div class="nextstepaction"]-> [Use the Indoor Maps module](how-to-use-indoor-module.md) +> [Use the Indoor Maps module] ++[Authorization with role-based access control]: azure-maps-authentication.md#authorization-with-role-based-access-control +[Azure AD authentication]: azure-maps-authentication.md#azure-ad-authentication +[Azure Maps Creator tutorial]: tutorial-creator-indoor-maps.md +[Azure Maps pricing]: https://aka.ms/CreatorPricing +[Azure portal]: https://portal.azure.com +[Data conversion]: creator-indoor-maps.md#convert-a-drawing-package +[Data upload]: creator-indoor-maps.md#upload-a-drawing-package +[Dataset]: creator-indoor-maps.md#datasets +[Feature State set]: creator-indoor-maps.md#feature-statesets +[Indoor map dynamic styling]: indoor-map-dynamic-styling.md +[Manage authentication in Azure Maps]: how-to-manage-authentication.md +[see Creator service geographic scope]: creator-geographic-scope.md +[Tileset]: creator-indoor-maps.md#tilesets +[Use the Indoor Maps module]: how-to-use-indoor-module.md |
azure-maps | Web Sdk Best Practices | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/web-sdk-best-practices.md | Title: Azure Maps Web SDK best practices description: Learn tips & tricks to optimize your use of the Azure Maps Web SDK. -- Previously updated : 11/29/2021++ Last updated : 04/13/2023 Generally, when looking to improve performance of the map, look for ways to redu ## Security best practices -For security best practices, see [Authentication and authorization best practices](authentication-best-practices.md). +For more information on security best practices, see [Authentication and authorization best practices]. ### Use the latest versions of Azure Maps -The Azure Maps SDKs go through regular security testing along with any external dependency libraries that may be used by the SDKs. Any known security issue is fixed in a timely manner and released to production. If your application points to the latest major version of the hosted version of the Azure Maps Web SDK, it will automatically receive all minor version updates that will include security related fixes. +The Azure Maps SDKs go through regular security testing along with any external dependency libraries used by the SDKs. Any known security issue is fixed in a timely manner and released to production. If your application points to the latest major version of the hosted version of the Azure Maps Web SDK, it automatically receives all minor version updates that include security related fixes. -If self-hosting the Azure Maps Web SDK via the npm module, be sure to use the caret (^) symbol to in combination with the Azure Maps npm package version number in your `package.json` file so that it will always point to the latest minor version. +If self-hosting the Azure Maps Web SDK via the npm module, be sure to use the caret (^) symbol in combination with the Azure Maps npm package version number in your `package.json` file so that it points to the latest minor version. ```json "dependencies": {- "azure-maps-control": "^2.0.30" + "azure-maps-control": "^2.2.6" } ``` +> [!TIP] +> Always use the latest version of the npm Azure Maps Control. For more information, see [azure-maps-control] in the npm documentation. + ## Optimize initial map load When a web page is loading, one of the first things you want to do is start rendering something as soon as possible so that the user isn't staring at a blank screen. ### Watch the maps ready event -Similarly, when the map initially loads often it is desired to load data on it as quickly as possible, so the user isn't looking at an empty map. Since the map loads resources asynchronously, you have to wait until the map is ready to be interacted with before trying to render your own data on it. There are two events you can wait for, a `load` event and a `ready` event. The load event will fire after the map has finished completely loading the initial map view and every map tile has loaded. The ready event will fire when the minimal map resources needed to start interacting with the map. The ready event can often fire in half the time of the load event and thus allow you to start loading your data into the map sooner. +Similarly, when the map initially loads, it's often desirable to load data on it as quickly as possible, so the user isn't looking at an empty map. Since the map loads resources asynchronously, you have to wait until the map is ready to be interacted with before trying to render your own data on it. 
There are two events you can wait for, a `load` event and a `ready` event. The load event will fire after the map has finished completely loading the initial map view and every map tile has loaded. The ready event fires when the minimal map resources needed to start interacting with the map have loaded. The ready event can often fire in half the time of the load event and thus allow you to start loading your data into the map sooner. ### Lazy load the Azure Maps Web SDK -If the map isn't needed right away, lazy load the Azure Maps Web SDK until it is needed. This delays the loading of the JavaScript and CSS files used by the Azure Maps Web SDK until needed. A common scenario where this occurs is when the map is loaded in a tab or flyout panel that isn't displayed on page load. +If the map isn't needed right away, lazy load the Azure Maps Web SDK until it's needed. This delays the loading of the JavaScript and CSS files used by the Azure Maps Web SDK until needed. A common scenario where this occurs is when the map is loaded in a tab or flyout panel that isn't displayed on page load. The following code sample shows how to delay loading the Azure Maps Web SDK until a button is pressed. <br/> The following code sample shows how to delay loading the Azure Maps Web SDK ### Add a placeholder for the map -If the map takes a while to load due to network limitations or other priorities within your application, consider adding a small background image to the map `div` as a placeholder for the map. This fills the void of the map `div` while it is loading. +If the map takes a while to load due to network limitations or other priorities within your application, consider adding a small background image to the map `div` as a placeholder for the map. This fills the void of the map `div` while it's loading. ### Set initial map style and camera options on initialization -Often apps want to load the map to a specific location or style. Sometimes developers will wait until the map has loaded (or wait for the `ready` event), and then use the `setCemer` or `setStyle` functions of the map. This often takes longer to get to the desired initial map view since many resources end up being loaded by default before the resources needed for the desired map view are loaded. A better approach is to pass in the desired map camera and style options into the map when initializing it. +Often apps want to load the map to a specific location or style. Sometimes developers wait until the map has loaded (or wait for the `ready` event), and then use the `setCamera` or `setStyle` functions of the map. This often takes longer to get to the desired initial map view since many resources end up being loaded by default before the resources needed for the desired map view are loaded. A better approach is to pass in the desired map camera and style options into the map when initializing it. ## Optimize data sources The Web SDK has two data sources, -* **GeoJSON source**: Known as the `DataSource` class, manages raw location data in GeoJSON format locally. Good for small to medium data sets (upwards of hundreds of thousands of features). -* **Vector tile source**: Known at the `VectorTileSource` class, loads data formatted as vector tiles for the current map view, based on the maps tiling system. Ideal for large to massive data sets (millions or billions of features). +* **GeoJSON source**: The `DataSource` class manages raw location data in GeoJSON format locally.
Good for small to medium data sets (upwards of hundreds of thousands of features). +* **Vector tile source**: The `VectorTileSource` class loads data formatted as vector tiles for the current map view, based on the maps tiling system. Ideal for large to massive data sets (millions or billions of features). ### Use tile-based solutions for large datasets If working with larger datasets containing millions of features, the recommended way to achieve optimal performance is to expose the data using a server-side solution such as a vector or raster image tile service. Vector tiles are optimized to load only the data that is in view with the geometries clipped to the focus area of the tile and generalized to match the resolution of the map for the zoom level of the tile. -The [Azure Maps Creator platform](creator-indoor-maps.md) provides the ability to retrieve data in vector tile format. Other data formats can be using tools such as [Tippecanoe](https://github.com/mapbox/tippecanoe) or one of the many [resources list on this page](https://github.com/mapbox/awesome-vector-tiles). +The [Azure Maps Creator platform] retrieves data in vector tile format. Other data formats can be converted using tools such as [Tippecanoe]. For more information on working with vector tiles, see the Mapbox [awesome-vector-tiles] readme in GitHub. -It is also possible to create a custom service that renders datasets as raster image tiles on the server-side and load the data using the TileLayer class in the map SDK. This provides exceptional performance as the map only needs to load and manage a few dozen images at most. However, there are some limitations with using raster tiles since the raw data is not available locally. A secondary service is often required to power any type of interaction experience, for example, find out what shape a user clicked on. Additionally, the file size of a raster tile is often larger than a compressed vector tile that contains generalized and zoom level optimized geometries. +It's also possible to create a custom service that renders datasets as raster image tiles on the server-side and load the data using the TileLayer class in the map SDK. This provides exceptional performance as the map only needs to load and manage a few dozen images at most. However, there are some limitations with using raster tiles since the raw data isn't available locally. A secondary service is often required to power any type of interaction experience, for example, find out what shape a user clicked on. Additionally, the file size of a raster tile is often larger than a compressed vector tile that contains generalized and zoom level optimized geometries. -Learn more about data sources in the [Create a data source](create-data-source-web-sdk.md) document. +For more information about data sources, see [Create a data source]. ### Combine multiple datasets into a single vector tile source -The less data sources the map has to manage, the faster it can process all features to be displayed. In particular, when it comes to tile sources, combining two vector tile sources together cuts the number of HTTP requests to retrieve the tiles in half, and the total amount of data would be slightly smaller since there is only one file header. +The fewer data sources the map has to manage, the faster it can process all features to be displayed.
In particular, when it comes to tile sources, combining two vector tile sources together cuts the number of HTTP requests to retrieve the tiles in half, and the total amount of data would be slightly smaller since there's only one file header. -Combining multiple data sets in a single vector tile source can be achieved using a tool such as [Tippecanoe](https://github.com/mapbox/tippecanoe). Data sets can be combined into a single feature collection or separated into separate layers within the vector tile known as source-layers. When connecting a vector tile source to a rendering layer, you would specify the source-layer that contains the data that you want to render with the layer. +Combining multiple data sets in a single vector tile source can be achieved using a tool such as [Tippecanoe]. Data sets can be combined into a single feature collection, or separated into distinct layers within the vector tile known as source-layers. When connecting a vector tile source to a rendering layer, you specify the source-layer that contains the data that you want to render with the layer. ### Reduce the number of canvas refreshes due to data updates -There are several ways data in a `DataSource` class can be added or updated. Listed below are the different methods and some considerations to ensure good performance. +There are several ways data in a `DataSource` class can be added or updated. The following list shows the different methods and some considerations to ensure good performance. -* The data sources `add` function can be used to add one or more features to a data source. Each time this function is called it will trigger a map canvas refresh. If adding many features, combine them into an array or feature collection and passing them into this function once, rather than looping over a data set and calling this function for each feature. +* The data sources `add` function can be used to add one or more features to a data source. Each time this function is called, it triggers a map canvas refresh.
If adding many features, combine them into an array or feature collection and pass them into this function once, rather than looping over a data set and calling this function for each feature. +* The data sources `setShapes` function can be used to overwrite all shapes in a data source. Under the hood, it combines the data sources `clear` and `add` functions together and does a single map canvas refresh instead of two, which is faster. Be sure to use this function when you want to update all data in a data source. +* The data sources `importDataFromUrl` function can be used to load a GeoJSON file via a URL into a data source. Once the data has been downloaded, it's passed into the data sources `add` function. If the GeoJSON file is hosted on a different domain, be sure that the other domain supports cross domain requests (CORS). If it doesn't, consider copying the data to a local file on your domain or creating a proxy service that has CORS enabled. If the file is large, consider converting it into a vector tile source. +* If features are wrapped with the `Shape` class, the `addProperty`, `setCoordinates`, and `setProperties` functions of the shape all trigger an update in the data source and a map canvas refresh. All features returned by the data sources `getShapes` and `getShapeById` functions are automatically wrapped with the `Shape` class. If you want to update several shapes, it's faster to convert them to JSON using the data sources `toJson` function, editing the GeoJSON, then passing this data into the data sources `setShapes` function. ### Avoid calling the data sources clear function unnecessarily Calling the clear function of the `DataSource` class causes a map canvas refresh. If the `clear` function is called multiple times in a row, a delay can occur while the map waits for each refresh to occur. -A common scenario where this often appears in applications is when an app clears the data source, downloads new data, clears the data source again then adds the new data to the data source. Depending on the desired user experience, the following alternatives would be better. +This is a common scenario in applications that clear the data source, download new data, clear the data source again, then add the new data to the data source. Depending on the desired user experience, the following alternatives would be better. -* Clear the data before downloading the new data, then pass the new data into the data sources `add` or `setShapes` function. If this is the only data set on the map, the map will be empty while the new data is downloading. -* Download the new data, then pass it into the data sources `setShapes` function. This will replace all the data on the map. +* Clear the data before downloading the new data, then pass the new data into the data sources `add` or `setShapes` function. If this is the only data set on the map, the map is empty while the new data is downloading. +* Download the new data, then pass it into the data sources `setShapes` function. This replaces all the data on the map. ### Remove unused features and properties If your dataset contains features that aren't going to be used in your app, remo * Reduces the number of features that need to be looped through when rendering the data. * Can sometimes help simplify or remove data-driven expressions and filters, which means less processing required at render time (see the sketch after this list).
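The following sketch illustrates the idea; the `rawFeatures` array and its `id` and `category` properties are hypothetical stand-ins, and `source` is assumed to be an existing `DataSource` instance:

```javascript
// Illustrative sketch: keep only the properties needed for rendering.
// 'rawFeatures' is a hypothetical array of GeoJSON features; 'id' and
// 'category' stand in for whatever properties your layers actually use.
var slimFeatures = rawFeatures.map(function (feature) {
    return {
        type: 'Feature',
        geometry: feature.geometry,
        properties: {
            id: feature.properties.id,            // used later to retrieve detailed content
            category: feature.properties.category // used for data-driven styling
        }
    };
});

// Adding the slimmed-down features in a single call triggers only one canvas refresh.
source.add(slimFeatures);
```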
-When features have numerous properties or content, it is much more performant to limit what gets added to the data source to just those needed for rendering and to have a separate method or service for retrieving the additional property or content when needed. For example, if you have a simple map displaying locations on a map when clicked a bunch of detailed content is displayed. If you want to use data driven styling to customize how the locations are rendered on the map, only load the properties needed into the data source. When you want to display the detailed content, use the ID of the feature to retrieve the additional content separately. If the content is stored on the server-side, a service can be used to retrieve it asynchronously, which would drastically reduce the amount of data that needs to be downloaded when the map is initially loaded. +When features have numerous properties or content, it's much more performant to limit what gets added to the data source to just those needed for rendering and to have a separate method or service for retrieving the other property or content when needed. For example, suppose you have a simple map that displays locations and shows detailed content when a location is selected. If you want to use data-driven styling to customize how the locations are rendered on the map, only load the properties needed into the data source. When you want to display the detailed content, use the ID of the feature to retrieve the other content separately. If the content is stored on the server, you can reduce the amount of data that needs to be downloaded when the map is initially loaded by using a service to retrieve it asynchronously. -Additionally, reducing the number of significant digits in the coordinates of features can also significantly reduce the data size. It is not uncommon for coordinates to contain 12 or more decimal places; however, six decimal places have an accuracy of about 0.1 meter, which is often more precise than the location the coordinate represents (six decimal places is recommended when working with small location data such as indoor building layouts). Having any more than six decimal places will likely make no difference in how the data is rendered and will only require the user to download more data for no added benefit. +Additionally, reducing the number of significant digits in the coordinates of features can also significantly reduce the data size. It isn't uncommon for coordinates to contain 12 or more decimal places; however, six decimal places have an accuracy of about 0.1 meter, which is often more precise than the location the coordinate represents (six decimal places is recommended when working with small location data such as indoor building layouts). Having any more than six decimal places will likely make no difference in how the data is rendered and only requires the user to download more data for no added benefit. -Here is a list of [useful tools for working with GeoJSON data](https://github.com/tmcw/awesome-geojson). +Here's a list of [useful tools for working with GeoJSON data]. ### Use a separate data source for rapidly changing data -Sometimes there is a need to rapidly update data on the map for things such as showing live updates of streaming data or animating features. When a data source is updated, the rendering engine will loop through and render all features in the data source.
Separating static data from rapidly changing data into different data sources can significantly reduce the number of features that are re-rendered on each update to the data source and improve overall performance. +Sometimes there's a need to rapidly update data on the map for things such as showing live updates of streaming data or animating features. When a data source is updated, the rendering engine loops through and renders all features in the data source. Improve overall performance by separating static from rapidly changing data into different data sources, reducing the number of features re-rendered during each update. If using vector tiles with live data, an easy way to support updates is to use the `expires` response header. By default, any vector tile source or raster tile layer will automatically reload tiles when the `expires` date is reached. The traffic flow and incident tiles in the map use this feature to ensure fresh real-time traffic data is displayed on the map. This feature can be disabled by setting the maps `refreshExpiredTiles` service option to `false`. If using vector tiles with live data, an easy way to support updates is to use t The `DataSource` class converts raw location data into local vector tiles for on-the-fly rendering. These local vector tiles clip the raw data to the bounds of the tile area with a bit of buffer to ensure smooth rendering between tiles. The smaller the `buffer` option is, the less overlapping data is stored in the local vector tiles and the better the performance; however, the greater the chance of rendering artifacts occurring. Try tweaking this option to get the right mix of performance with minimal rendering artifacts. -The `DataSource` class also has a `tolerance` option that is used with the Douglas-Peucker simplification algorithm when reducing the resolution of geometries for rendering purposes. Increasing this tolerance value will reduce the resolution of geometries and in turn improve performance. Tweak this option to get the right mix of geometry resolution and performance for your data set. +The `DataSource` class also has a `tolerance` option that is used with the Douglas-Peucker simplification algorithm when reducing the resolution of geometries for rendering purposes. Increasing this tolerance value reduces the resolution of geometries and in turn improves performance. Tweak this option to get the right mix of geometry resolution and performance for your data set. ### Set the max zoom option of GeoJSON data sources -The `DataSource` class converts raw location data into vector tiles local for on-the-fly rendering. By default, it will do this until zoom level 18, at which point, when zoomed in closer, it will sample data from the tiles generated for zoom level 18. This works well for most data sets that need to have high resolution when zoomed in at these levels. However, when working with data sets that are more likely to be viewed when zoomed out more, such as when viewing state or province polygons, setting the `minZoom` option of the data source to a smaller value such as `12` will reduce the amount computation, local tile generation that occurs, and memory used by the data source and increase performance. +The `DataSource` class converts raw location data into local vector tiles for on-the-fly rendering. By default, it does this until zoom level 18, at which point, when zoomed in closer, it samples data from the tiles generated for zoom level 18.
This works well for most data sets that need to have high resolution when zoomed in at these levels. However, when working with data sets that are more likely to be viewed when zoomed out more, such as when viewing state or province polygons, setting the `maxZoom` option of the data source to a smaller value such as `12` reduces the amount of computation and local tile generation that occurs, and the memory used by the data source, and increases performance. ### Minimize GeoJSON response When loading GeoJSON data from a server either through a service or by loading a ### Access raw GeoJSON using a URL -It is possible to store GeoJSON objects inline inside of JavaScript, however this will use a lot of memory as copies of it will be stored across the variable you created for this object and the data source instance, which manages it within a separate web worker. Expose the GeoJSON to your app using a URL instead and the data source will load a single copy of data directly into the data sources web worker. +It's possible to store GeoJSON objects inline inside of JavaScript; however, this uses more memory as copies of it are stored in both the variable you created for this object and the data source instance, which manages it within a separate web worker. Expose the GeoJSON to your app using a URL instead, and the data source loads a single copy of the data directly into its web worker. ## Optimize rendering layers Azure maps provides several different layers for rendering data on a map. There ### Create layers once and reuse them -The Azure Maps Web SDK is decided to be data driven. Data goes into data sources, which are then connected to rendering layers. If you want to change the data on the map, update the data in the data source or change the style options on a layer. This is often much faster than removing and then recreating layers whenever there is a change. +The Azure Maps Web SDK is data driven. Data goes into data sources, which are then connected to rendering layers. If you want to change the data on the map, update the data in the data source or change the style options on a layer. This is often faster than removing, then recreating layers with every change. ### Consider bubble layer over symbol layer -The bubble layer renders points as circles on the map and can easily have their radius and color styled using a data-driven expression. Since the circle is a simple shape for WebGL to draw, the rendering engine will be able to render these much faster than a symbol layer, which has to load and render an image. The performance difference of these two rendering layers is noticeable when rendering tens of thousands of points. +The bubble layer renders points as circles on the map, and their radius and color can easily be styled using a data-driven expression. Since the circle is a simple shape for WebGL to draw, the rendering engine is able to render these faster than a symbol layer, which has to load and render an image. The performance difference of these two rendering layers is noticeable when rendering tens of thousands of points. ### Use HTML markers and Popups sparingly -Unlike most layers in the Azure Maps Web control that use WebGL for rendering, HTML Markers and Popups use traditional DOM elements for rendering. As such, the more HTML markers and Popups added a page, the more DOM elements there are. Performance can degrade after adding a few hundred HTML markers or popups. For larger data sets, consider either clustering your data or using a symbol or bubble layer.
For popups, a common strategy is to create a single popup and reuse it by updating its content and position as shown in the below example: +Unlike most layers in the Azure Maps Web control that use WebGL for rendering, HTML Markers and Popups use traditional DOM elements for rendering. As such, the more HTML markers and Popups added to a page, the more DOM elements there are. Performance can degrade after adding a few hundred HTML markers or popups. For larger data sets, consider either clustering your data or using a symbol or bubble layer. For popups, a common strategy is to create a single popup and reuse it by updating its content and position as shown in the following example: <br/> That said, if you only have a few points to render on the map, the simplicity of ### Combine layers -The map is capable of rendering hundreds of layers, however, the more layers there are, the more time it takes to render a scene. One strategy to reduce the number of layers is to combine layers that have similar styles or can be styled using a [data-driven styles](data-driven-style-expressions-web-sdk.md). +The map is capable of rendering hundreds of layers; however, the more layers there are, the more time it takes to render a scene. One strategy to reduce the number of layers is to combine layers that have similar styles or can be styled using [data-driven styles]. -For example, consider a data set where all features have a `isHealthy` property that can have a value of `true` or `false`. If creating a bubble layer that renders different colored bubbles based on this property, there are several ways to do this as listed below from least performant to most performant. +For example, consider a data set where all features have an `isHealthy` property that can have a value of `true` or `false`. If creating a bubble layer that renders different colored bubbles based on this property, there are several ways to do this as shown in the following list, from least performant to most performant. * Split the data into two data sources based on the `isHealthy` value and attach a bubble layer with a hard-coded color option to each data source.-* Put all the data into a single data source and create two bubble layers with a hard-coded color option and a filter based on the `isHealthy` property. +* Put all the data into a single data source and create two bubble layers with a hard-coded color option and a filter based on the `isHealthy` property. +* Put all the data into a single data source, create a single bubble layer with a `case` style expression for the color option based on the `isHealthy` property. Here's a code sample that demonstrates this. ```javascript var layer = new atlas.layer.BubbleLayer(source, null, { var layer = new atlas.layer.BubbleLayer(source, null, { Symbol layers have collision detection enabled by default. This collision detection aims to ensure that no two symbols overlap. The icon and text of a symbol layer each have two options, -* `allowOverlap` - specifies if the symbol will be visible if it collides with other symbols. +* `allowOverlap` - specifies if the symbol is visible when it collides with other symbols. * `ignorePlacement` - specifies if the other symbols are allowed to collide with the symbol. -Both of these options are set to `false` by default.
When animating a symbol, the collision detection calculations will run on each frame of the animation, which can slow down the animation and make it look less fluid. To smooth out the animation, set these options to `true`. +Both of these options are set to `false` by default. When animating a symbol, the collision detection calculations run on each frame of the animation, which can slow down the animation and make it look less fluid. To smooth out the animation, set these options to `true`. The following code sample shows a simple way to animate a symbol layer. If your data meets one of the following criteria, be sure to specify the min and * If the data is coming from a vector tile source, often source layers for different data types are only available through a range of zoom levels. * If using a tile layer that doesn't have tiles for all zoom levels 0 through 24 and you want it to only render at the levels where it has tiles, and not try to fill in missing tiles with tiles from other zoom levels. * If you only want to render a layer at certain zoom levels.-All layers have a `minZoom` and `maxZoom` option where the layer will be rendered when between these zoom levels based on this logic `maxZoom > zoom >= minZoom`. +All layers have a `minZoom` and `maxZoom` option; the layer is rendered when the map's zoom level is between these values, based on this logic: `maxZoom > zoom >= minZoom`. **Example** var layer = new atlas.layer.BubbleLayer(dataSource, null, { ### Specify tile layer bounds and source zoom range -By default, tile layers will load tiles across the whole globe. However, if the tile service only has tiles for a certain area the map will try to load tiles when outside of this area. When this happens, a request for each tile will be made and wait for a response that can block other requests being made by the map and thus slow down the rendering of other layers. Specifying the bounds of a tile layer will result in the map only requesting tiles that are within that bounding box. Also, if the tile layer is only available between certain zoom levels, specify the min and max source zoom for the same reason. +By default, tile layers load tiles across the whole globe. However, if the tile service only has tiles for a certain area, the map tries to load tiles outside of this area. When this happens, a request is made for each tile, and the map waits for a response; these requests can block other requests made by the map and thus slow down the rendering of other layers. Specifying the bounds of a tile layer results in the map only requesting tiles that are within that bounding box. Also, if the tile layer is only available between certain zoom levels, specify the min and max source zoom for the same reason. **Example** var tileLayer = new atlas.layer.TileLayer({ ### Use a blank map style when base map not visible -If a layer is being overlaid on the map that will completely cover the base map, consider setting the map style to `blank` or `blank_accessible` so that the base map isn't rendered. A common scenario for doing this is when overlaying a full globe tile at has no opacity or transparent area above the base map. +If a layer is overlaid on the map that completely covers the base map, consider setting the map style to `blank` or `blank_accessible` so that the base map isn't rendered. A common scenario for doing this is when overlaying a full-globe tile layer that has no opacity or transparent areas above the base map.
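As a hedged sketch of that scenario (the tile endpoint, bounds, and zoom range below are placeholder assumptions, not values from the article), the map can be initialized with a blank style and the opaque tile layer added once it's ready:

```javascript
// Minimal sketch: skip rendering the base map when an opaque tile layer covers it.
// The tile service URL is a placeholder; substitute your own tile endpoint.
var map = new atlas.Map('myMap', {
    style: 'blank_accessible',
    authOptions: {
        authType: 'subscriptionKey',
        subscriptionKey: '<Your Azure Maps Key>'
    }
});

map.events.add('ready', function () {
    // Bounds and source zoom range limit requests to where tiles actually exist.
    map.layers.add(new atlas.layer.TileLayer({
        tileUrl: 'https://example.com/tiles/{z}/{x}/{y}.png',
        bounds: [-122.5, 47.4, -122.1, 47.8],
        minSourceZoom: 5,
        maxSourceZoom: 15
    }));
});
```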
### Smoothly animate image or tile layers -If you want to animate through a series of image or tile layers on the map. It is often faster to create a layer for each image or tile layer and to change the opacity than to update the source of a single layer on each animation frame. Hiding a layer by setting the opacity to zero and showing a new layer by setting its opacity to a value greater than zero is much faster than updating the source in the layer. Alternatively, the visibility of the layers can be toggled, but be sure to set the fade duration of the layer to zero, otherwise it will animate the layer when displaying it, which will cause a flicker effect since the previous layer would have been hidden before the new layer is visible. +If you want to animate through a series of image or tile layers on the map, it's often faster to create a layer for each image or tile layer and to change the opacity than to update the source of a single layer on each animation frame. Hiding a layer by setting the opacity to zero and showing a new layer by setting its opacity to a value greater than zero is faster than updating the source in the layer. Alternatively, the visibility of the layers can be toggled, but be sure to set the fade duration of the layer to zero, otherwise it animates the layer when displaying it, which causes a flicker effect since the previous layer would have been hidden before the new layer is visible. ### Tweak Symbol layer collision detection logic -The symbol layer has two options that exist for both icon and text called `allowOverlap` and `ignorePlacement`. These two options specify if the icon or text of a symbol can overlap or be overlapped. When these are set to `false`, the symbol layer will do calculations when rendering each point to see if it collides with any other already rendered symbol in the layer, and if it does, will not render the colliding symbol. This is good at reducing clutter on the map and reducing the number of objects rendered. By setting these options to `false`, this collision detection logic will be skipped, and all symbols will be rendered on the map. Tweak this option to get the best combination of performance and user experience. +The symbol layer has two options that exist for both icon and text called `allowOverlap` and `ignorePlacement`. These two options specify if the icon or text of a symbol can overlap or be overlapped. When these are set to `false`, the symbol layer does calculations when rendering each point to see if it collides with any other already rendered symbol in the layer, and if it does, doesn't render the colliding symbol. This is good at reducing clutter on the map and reducing the number of objects rendered. By setting these options to `true`, this collision detection logic is skipped, and all symbols are rendered on the map. Tweak this option to get the best combination of performance and user experience. ### Cluster large point data sets -When working with large sets of data points you may find that when rendered at certain zoom levels, many of the points overlap and are only partial visible, if at all. Clustering is process of grouping points that are close together and representing them as a single clustered point. As the user zooms the map in, clusters will break apart into their individual points. This can significantly reduce the amount of data that needs to be rendered, make the map feel less cluttered, and improve performance. The `DataSource` class has options for clustering data locally.
Additionally, many tools that generate vector tiles also have clustering options. +When working with large sets of data points, you may find that when rendered at certain zoom levels, many of the points overlap and are only partially visible, if at all. Clustering is the process of grouping points that are close together and representing them as a single clustered point. As the user zooms in on the map, clusters break apart into their individual points. This can significantly reduce the amount of data that needs to be rendered, make the map feel less cluttered, and improve performance. The `DataSource` class has options for clustering data locally. Additionally, many tools that generate vector tiles also have clustering options. -Additionally, increase the size of the cluster radius to improve performance. The larger the cluster radius, the less clustered points there is to keep track of and render. -Learn more in the [Clustering point data document](clustering-point-data-web-sdk.md) +Additionally, increase the size of the cluster radius to improve performance. The larger the cluster radius, the fewer clustered points there are to keep track of and render. +For more information, see [Clustering point data in the Web SDK]. ### Use weighted clustered heat maps -The heat map layer can render tens of thousands of data points easily. For larger data sets, consider enabling clustering on the data source and using a small cluster radius and use the clusters `point_count` property as a weight for the height map. When the cluster radius is only a few pixels in size, there will be little visual difference in the rendered heat map. Using a larger cluster radius will improve performance more but may reduce the resolution of the rendered heat map. +The heat map layer can render tens of thousands of data points easily. For larger data sets, consider enabling clustering on the data source and using a small cluster radius, and use the clusters `point_count` property as a weight for the heat map. When the cluster radius is only a few pixels in size, there's little visual difference in the rendered heat map. Using a larger cluster radius improves performance more but may reduce the resolution of the rendered heat map. ```javascript var layer = new atlas.layer.HeatMapLayer(source, null, { var layer = new atlas.layer.HeatMapLayer(source, null, { }); ``` -Learn more in the [Clustering and heat maps in this document](clustering-point-data-web-sdk.md#clustering-and-the-heat-maps-layer) +For more information, see [Clustering and the heat maps layer]. ### Keep image resources small -Images can be added to the maps image sprite for rendering icons in a symbol layer or patterns in a polygon layer. Keep these images small to minimize the amount of data that has to be downloaded and the amount of space they take up in the maps image sprite. When using a symbol layer that scales the icon using the `size` option, use an image that is the maximum size your plan to display on the map and no bigger. This ensures the icon is rendered with high resolution while minimizing the resources it uses. Additionally, SVG's can also be used as a smaller file format for simple icon images. +Images can be added to the maps image sprite for rendering icons in a symbol layer or patterns in a polygon layer. Keep these images small to minimize the amount of data that has to be downloaded and the amount of space they take up in the maps image sprite.
When using a symbol layer that scales the icon using the `size` option, use an image that is the maximum size you plan to display on the map and no bigger. This ensures the icon is rendered with high resolution while minimizing the resources it uses. Additionally, SVGs can also be used as a smaller file format for simple icon images. ## Optimize expressions -[Data-driven style expressions](data-driven-style-expressions-web-sdk.md) provide a lot of flexibility and power for filtering and styling data on the map. There are many ways in which expressions can be optimized. Here are a few tips. +[Data-driven style expressions] provide flexibility and power for filtering and styling data on the map. There are many ways in which expressions can be optimized. Here are a few tips. ### Reduce the complexity of filters Filters loop over all data in a data source and check to see if each filter matc ### Make sure expressions don't produce errors -Expressions are often used to generate code to perform calculations or logical operations at render time. Just like the code in the rest of your application, be sure the calculations and logical make sense and are not error prone. Errors in expressions will cause issues in evaluating the expression, which can result in reduced performance and rendering issues. +Expressions are often used to generate code to perform calculations or logical operations at render time. Just like the code in the rest of your application, be sure the calculations and logic make sense and aren't error prone. Errors in expressions cause issues in evaluating the expression, which can result in reduced performance and rendering issues. One common error to be mindful of is having an expression that relies on a feature property that might not exist on all features. For example, the following code uses an expression to set the color property of a bubble layer to the `myColor` property of a feature. var layer = new atlas.layer.BubbleLayer(source, null, { }); ``` -The above code will function fine if all features in the data source have a `myColor` property, and the value of that property is a color. This may not be an issue if you have complete control of the data in the data source and know for certain all features will have a valid color in a `myColor` property. That said, to make this code safe from errors, a `case` expression can be used with the `has` expression to check that the feature has the `myColor` property. If it does, the `to-color` type expression can then be used to try to convert the value of that property to a color. If the color is invalid, a fallback color can be used. The following code demonstrates how to do this and sets the fallback color to green. +The above code functions fine if all features in the data source have a `myColor` property, and the value of that property is a color. This may not be an issue if you have complete control of the data in the data source and know for certain all features have a valid color in a `myColor` property. That said, to make this code safe from errors, a `case` expression can be used with the `has` expression to check that the feature has the `myColor` property. If it does, the `to-color` type expression can then be used to try to convert the value of that property to a color. If the color is invalid, a fallback color can be used. The following code demonstrates how to do this and sets the fallback color to green.
```javascript var layer = new atlas.layer.BubbleLayer(source, null, { var layer = new atlas.layer.BubbleLayer(source, null, { ### Order boolean expressions from most specific to least specific -When using boolean expressions that contain multiple conditional tests, order the conditional tests from most specific to least specific. By doing this, the first condition should reduce the amount of data the second condition has to be tested against, thus reducing the total number of conditional tests that need to be performed. +Reduce the total number of conditional tests required when using boolean expressions that contain multiple conditional tests by ordering them from most to least specific. ### Simplify expressions -Expressions can be powerful and sometimes complex. The simpler an expression is, the faster it will be evaluated. For example, if a simple comparison is needed, an expression like `['==', ['get', 'category'], 'restaurant']` would be better than using a match expression like `['match', ['get', 'category'], 'restaurant', true, false]`. In this case, if the property being checked is a boolean value, a `get` expression would be even simpler `['get','isRestaurant']`. +Expressions can be powerful and sometimes complex. The simpler an expression is, the faster it's evaluated. For example, if a simple comparison is needed, an expression like `['==', ['get', 'category'], 'restaurant']` would be better than using a match expression like `['match', ['get', 'category'], 'restaurant', true, false]`. In this case, if the property being checked is a boolean value, a `get` expression would be even simpler: `['get','isRestaurant']`. ## Web SDK troubleshooting The following are some tips to debugging some of the common issues encountered w **Why doesn't the map display when I load the web control?** -Do the following: +Things to check: -* Ensure that you have added your added authentication options to the map. If this is not added, the map will load with a blank canvas since it can't access the base map data without authentication and 401 errors will appear in the network tab of the browser's developer tools. +* Ensure that you've set your authentication options on the map. Without authentication, the map loads a blank canvas and 401 errors appear in the network tab of the browser's developer tools. * Ensure that you have an internet connection. * Check the console of the browser's developer tools for errors. Some errors may cause the map not to render. Debug your application.-* Ensure you are using a [supported browser](supported-browsers.md). +* Ensure you're using a [supported browser]. **All my data is showing up on the other side of the world, what's going on?** -Coordinates, also referred to as positions, in the Azure Maps SDKs aligns with the geospatial industry standard format of `[longitude, latitude]`. This same format is also how coordinates are defined in the GeoJSON schema; the core data formatted used within the Azure Maps SDKs. +Coordinates, also referred to as positions, in the Azure Maps SDKs align with the geospatial industry standard format of `[longitude, latitude]`. This same format is also how coordinates are defined in the GeoJSON schema, the core data format used within the Azure Maps SDKs.
If your data is appearing on the opposite side of the world, it's most likely due to the longitude and latitude values being reversed in your coordinate/position information. **Why are HTML markers appearing in the wrong place in the web control?** Things to check: **Why are icons or text in the symbol layer appearing in the wrong place?** -Check that the `anchor` and the `offset` options are correctly configured to align with the part of your image or text that you want to have aligned with the coordinate on the map. -If the symbol is only out of place when the map is rotated, check the `rotationAlignment` option. By default, symbols we will rotate with the maps viewport so that they appear upright to the user. However, depending on your scenario, it may be desirable to lock the symbol to the map's orientation. Set the `rotationAlignment` option to `'map'` to do this. -If the symbol is only out of place when the map is pitched/tilted, check the `pitchAlignment` option. By default, symbols we will stay upright with the maps viewport as the map is pitched or tilted. However, depending on your scenario, it may be desirable to lock the symbol to the map's pitch. Set the `pitchAlignment` option to `'map'` to do this. +Check that the `anchor` and the `offset` options are configured correctly to align with the part of your image or text that you want to have aligned with the coordinate on the map. +If the symbol is only out of place when the map is rotated, check the `rotationAlignment` option. By default, symbols rotate with the maps viewport, appearing upright to the user. However, depending on your scenario, it may be desirable to lock the symbol to the map's orientation by setting the `rotationAlignment` option to `map`. ++If the symbol is only out of place when the map is pitched/tilted, check the `pitchAlignment` option. By default, symbols stay upright in the maps viewport when the map is pitched or tilted. However, depending on your scenario, it may be desirable to lock the symbol to the map's pitch by setting the `pitchAlignment` option to `map`. **Why isn't any of my data appearing on the map?** Things to check: * Check the console in the browser's developer tools for errors. * Ensure that a data source has been created and added to the map, and that the data source has been connected to a rendering layer that has also been added to the map.-* Add break points in your code and step through it to ensure data is being added to the data source and the data source and layers are being added to the map without any errors occurring. +* Add breakpoints in your code and step through it. Ensure data is added to the data source, and that the data source and layers are added to the map. * Try removing data-driven expressions from your rendering layer. It's possible that one of them may have an error in it that is causing the issue. **Can I use the Azure Maps Web SDK in a sandboxed iframe?** Yes. -> [!TIP] -> Safari has a [bug](https://bugs.webkit.org/show_bug.cgi?id=170075) that prevents sandboxed iframes from running web workers, a requirement of the Azure Maps Web SDK. The solution is to add the `"allow-same-origin"` tag to the sandbox property of the iframe. - ## Get support The following are the different ways to get support for Azure Maps depending on your issue. **How do I report a data issue or an issue with an address?** -Report data issues using the [Azure Maps data feedback tool](https://feedback.azuremaps.com).
Detailed instructions on reporting data issues are provided in the [Provide data feedback to Azure Maps](how-to-use-feedback-tool.md) article. +Report issues using the [Azure Maps feedback] site. Detailed instructions on reporting data issues are provided in the [Provide data feedback to Azure Maps] article. > [!NOTE] > Each issue submitted generates a unique URL to track it. Resolution times vary depending on issue type and the time required to verify the change is correct. The changes will appear in the render services weekly update, while other services such as geocoding and routing are updated monthly. **How do I report a bug in a service or API?** -Report issues on Azure's [Help + support](https://portal.azure.com/#blade/Microsoft_Azure_Support/HelpAndSupportBlade/overview) page by selecting the **Create a support request** button. +Report issues on Azure's [Help + support] page by selecting the **Create a support request** button. **Where do I get technical help for Azure Maps?** -* For questions related to the Azure Maps Power BI visual, contact [Power BI support](https://powerbi.microsoft.com/support/). +* For questions related to the Azure Maps Power BI visual, contact [Power BI support]. -* For all other Azure Maps services, contact [Azure support](https://azure.com/support). +* For all other Azure Maps services, contact [Azure support]. -* For question or comments on specific Azure Maps Features, use the [Azure Maps developer forums](/answers/topics/azure-maps.html). +* For questions or comments on specific Azure Maps features, use the [Azure Maps developer forums]. ## Next steps See the following articles for more tips on improving the user experience in your application. > [!div class="nextstepaction"]-> [Make your application accessible](map-accessibility.md) +> [Make your application accessible] Learn more about the terminology used by Azure Maps and the geospatial industry. > [!div class="nextstepaction"]-> [Azure Maps glossary](glossary.md) +> [Azure Maps glossary] ++[Authentication and authorization best practices]: authentication-best-practices.md +[awesome-vector-tiles]: https://github.com/mapbox/awesome-vector-tiles#awesome-vector-tiles- +[Azure Maps Creator platform]: creator-indoor-maps.md +[Azure Maps developer forums]: /answers/topics/azure-maps.html +[Azure Maps feedback]: https://feedback.azuremaps.com +[Azure Maps glossary]: glossary.md +[Azure support]: https://azure.com/support +[azure-maps-control]: https://www.npmjs.com/package/azure-maps-control?activeTab=versions +[bug]: https://bugs.webkit.org/show_bug.cgi?id=170075 +[Clustering and the heat maps layer]: clustering-point-data-web-sdk.md#clustering-and-the-heat-maps-layer +[Clustering point data in the Web SDK]: clustering-point-data-web-sdk.md +[Create a data source]: create-data-source-web-sdk.md +[Data-driven style expressions]: data-driven-style-expressions-web-sdk.md +[data-driven styles]: data-driven-style-expressions-web-sdk.md +[Help + support]: https://portal.azure.com/#blade/Microsoft_Azure_Support/HelpAndSupportBlade/overview +[Make your application accessible]: map-accessibility.md +[Power BI support]: https://powerbi.microsoft.com/support +[Provide data feedback to Azure Maps]: how-to-use-feedback-tool.md +[supported browser]: supported-browsers.md +[Tippecanoe]: https://github.com/mapbox/tippecanoe +[useful tools for working with GeoJSON data]: https://github.com/tmcw/awesome-geojson |
azure-monitor | Convert Classic Resource | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/convert-classic-resource.md | Title: Migrate an Application Insights classic resource to a workspace-based resource - Azure Monitor | Microsoft Docs description: Learn how to upgrade your Application Insights classic resource to the new workspace-based model. Previously updated : 03/22/2023 Last updated : 05/14/2023 Legacy table: availabilityResults |name|string|Name|string| |operation_Id|string|OperationId|string| |operation_Name|string|OperationName|string|-|operation_ParentId|string|OperationParentId|string| +|operation_ParentId|string|ParentId|string| |operation_SyntheticSource|string|OperationSyntheticSource|string| |performanceBucket|string|PerformanceBucket|string| |sdkVersion|string|SDKVersion|string| Legacy table: browserTimings |networkDuration|real|NetworkDurationMs|real| |operation_Id|string|OperationId|string| |operation_Name|string|OperationName|string|-|operation_ParentId|string|OperationParentId|string| +|operation_ParentId|string|ParentId|string| |operation_SyntheticSource|string|OperationSyntheticSource|string| |performanceBucket|string|PerformanceBucket|string| |processingDuration|real|ProcessingDurationMs|real| Legacy table: dependencies |name|string|Name|string| |operation_Id|string|OperationId|string| |operation_Name|string|OperationName|string|-|operation_ParentId|string|OperationParentId|string| +|operation_ParentId|string|ParentId|string| |operation_SyntheticSource|string|OperationSyntheticSource|string| |performanceBucket|string|PerformanceBucket|string| |resultCode|string|ResultCode|string| Legacy table: customEvents |name|string|Name|string| |operation_Id|string|OperationId|string| |operation_Name|string|OperationName|string|-|operation_ParentId|string|OperationParentId|string| +|operation_ParentId|string|ParentId|string| |operation_SyntheticSource|string|OperationSyntheticSource|string| |sdkVersion|string|SDKVersion|string| |session_Id|string|SessionId|string| Legacy table: customMetrics |name|string|Name|string| |operation_Id|string|OperationId|string| |operation_Name|string|OperationName|string|-|operation_ParentId|string|OperationParentId|string| +|operation_ParentId|string|ParentId|string| |operation_SyntheticSource|string|OperationSyntheticSource|string| |sdkVersion|string|SDKVersion|string| |session_Id|string|SessionId|string| Legacy table: pageViews |name|string|Name|string| |operation_Id|string|OperationId|string| |operation_Name|string|OperationName|string|-|operation_ParentId|string|OperationParentId|string| +|operation_ParentId|string|ParentId|string| |operation_SyntheticSource|string|OperationSyntheticSource|string| |performanceBucket|string|PerformanceBucket|string| |sdkVersion|string|SDKVersion|string| Legacy table: performanceCounters |name|string|Name|string| |operation_Id|string|OperationId|string| |operation_Name|string|OperationName|string|-|operation_ParentId|string|OperationParentId|string| +|operation_ParentId|string|ParentId|string| |operation_SyntheticSource|string|OperationSyntheticSource|string| |sdkVersion|string|SDKVersion|string| |session_Id|string|SessionId|string| Legacy table: requests |name|string|Name|String| |operation_Id|string|OperationId|string| |operation_Name|string|OperationName|string|-|operation_ParentId|string|OperationParentId|string| +|operation_ParentId|string|ParentId|string| |operation_SyntheticSource|string|OperationSyntheticSource|string| 
|performanceBucket|string|PerformanceBucket|String| |resultCode|string|ResultCode|String| Legacy table: exceptions |method|string|Method|string| |operation_Id|string|OperationId|string| |operation_Name|string|OperationName|string|-|operation_ParentId|string|OperationParentId|string| +|operation_ParentId|string|ParentId|string| |operation_SyntheticSource|string|OperationSyntheticSource|string| |outerAssembly|string|OuterAssembly|string| |outerMessage|string|OuterMessage|string| |
azure-monitor | Separate Resources | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/separate-resources.md | Be aware that: To make it easier to change the instrumentation key as the code moves between stages of production, reference the key dynamically in code instead of using a hardcoded or static value. -Set the key in an initialization method, such as `global.aspx.cs`, in an ASP.NET service: +Set the key in an initialization method, such as `global.asax.cs`, in an ASP.NET service: ```csharp protected void Application_Start() |
azure-resource-manager | Key Vault Access | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/managed-applications/key-vault-access.md | Title: Use Azure Key Vault when deploying Managed Applications description: Shows how to access secrets in Azure Key Vault when deploying Managed Applications. Previously updated : 10/04/2022 Last updated : 04/14/2023 # Access Key Vault secret when deploying Azure Managed Applications This article describes how to configure the Key Vault to work with Managed Appli :::image type="content" source="./media/key-vault-access/open-key-vault.png" alt-text="Screenshot of the Azure home page to open a key vault using search or by selecting key vault."::: -1. Select **Access policies**. - :::image type="content" source="./media/key-vault-access/select-access-policies.png" alt-text="Screenshot of the key vault setting to select access policies."::: +1. Select **Access configuration**. + :::image type="content" source="./media/key-vault-access/select-access-configuration.png" alt-text="Screenshot of the key vault setting to select access configuration."::: -1. Select **Azure Resource Manager for template deployment**. Then, select **Save**. - :::image type="content" source="./media/key-vault-access/enable-template.png" alt-text="Screenshot of the key vault's access policies that enable Azure Resource Manager for template deployment."::: +1. Select **Azure Resource Manager for template deployment**. Then, select **Apply**. + :::image type="content" source="./media/key-vault-access/enable-template.png" alt-text="Screenshot of the key vault's access configuration that enables Azure Resource Manager for template deployment."::: ## Add service as contributor -Assign the **Contributor** role to the **Appliance Resource Provider** user at the key vault scope. For detailed steps, see [Assign Azure roles using the Azure portal](../../role-based-access-control/role-assignments-portal.md). +Assign the **Contributor** role to the **Appliance Resource Provider** user at the key vault scope. For detailed steps, go to [Assign Azure roles using the Azure portal](../../role-based-access-control/role-assignments-portal.md). -The **Appliance Resource Provider** is a service principal in your Azure Active Directory's tenant. From the Azure portal, you can see if it's registered by going to **Azure Active Directory** > **Enterprise applications** and change the search filter to **Microsoft Applications**. Search for _Appliance Resource Provider_. If it's not found, [register](../troubleshooting/error-register-resource-provider.md) the `Microsoft.Solutions` resource provider. +The **Appliance Resource Provider** is a service principal in your Azure Active Directory's tenant. From the Azure portal, you can verify if it's registered by going to **Azure Active Directory** > **Enterprise applications** and changing the search filter to **Microsoft Applications**. Search for _Appliance Resource Provider_. If it's not found, [register](../troubleshooting/error-register-resource-provider.md) the `Microsoft.Solutions` resource provider.
## Reference Key Vault secret To pass a secret from a Key Vault to a template in your Managed Application, you "resources": [ { "type": "Microsoft.Resources/deployments",- "apiVersion": "2021-04-01", + "apiVersion": "2022-09-01", "name": "dynamicSecret", "properties": { "mode": "Incremental", To pass a secret from a Key Vault to a template in your Managed Application, you "resources": [ { "type": "Microsoft.Sql/servers",- "apiVersion": "2022-02-01-preview", + "apiVersion": "2022-05-01-preview", "name": "[variables('sqlServerName')]", "location": "[parameters('location')]", "properties": { To pass a secret from a Key Vault to a template in your Managed Application, you You've configured your Key Vault to be accessible during deployment of a Managed Application. -- For information about passing a value from a Key Vault as a template parameter, see [Use Azure Key Vault to pass secure parameter value during deployment](../templates/key-vault-parameter.md).-- To learn more about key vault security, see [Azure Key Vault security](../../key-vault/general/security-features.md) and [Authentication in Azure Key Vault](../../key-vault/general/authentication.md).-- For managed application examples, see [Sample projects for Azure managed applications](sample-projects.md).-- To learn how to create a UI definition file for a managed application, see [Get started with CreateUiDefinition](create-uidefinition-overview.md).+- For information about passing a value from a Key Vault as a template parameter, go to [Use Azure Key Vault to pass secure parameter value during deployment](../templates/key-vault-parameter.md). +- To learn more about key vault security, go to [Azure Key Vault security](../../key-vault/general/security-features.md) and [Authentication in Azure Key Vault](../../key-vault/general/authentication.md). +- For managed application examples, go to [Sample projects for Azure managed applications](sample-projects.md). +- To learn how to create a UI definition file for a managed application, go to [Get started with CreateUiDefinition](create-uidefinition-overview.md). |
azure-video-indexer | Logic Apps Connector Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-video-indexer/logic-apps-connector-tutorial.md | Title: The Azure Video Indexer connectors with Logic App and Power Automate. description: This tutorial shows how to unlock new experiences and monetization opportunities using Azure Video Indexer connectors with Logic App and Power Automate. -+ Last updated 09/21/2020 |
azure-video-indexer | Monitor Video Indexer Data Reference | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-video-indexer/monitor-video-indexer-data-reference.md | Title: Monitoring Azure Video Indexer data reference #Required; *your official service name* + Title: Monitoring Azure Video Indexer data reference description: Important reference material needed when you monitor Azure Video Indexer -+ --++ Previously updated : 05/10/2022 #Required; mm/dd/yyyy format. Last updated : 05/10/2022 <!-- VERSION 2.3 Template for monitoring data reference article for Azure services. This article is support for the main "Monitoring [servicename]" article for the service. --> |
azure-video-indexer | Monitor Video Indexer | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-video-indexer/monitor-video-indexer.md | Title: Monitoring Azure Video Indexer #Required; Must be "Monitoring *Azure Video Indexer* -description: Start here to learn how to monitor Azure Video Indexer #Required; + Title: Monitoring Azure Video Indexer +description: Start here to learn how to monitor Azure Video Indexer ---+++ Previously updated : 12/19/2022 #Required; mm/dd/yyyy format. Last updated : 12/19/2022 <!-- VERSION 2.2 |
azure-vmware | Azure Vmware Solution Platform Updates | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-vmware/azure-vmware-solution-platform-updates.md | Last updated 3/16/2023 Microsoft will regularly apply important updates to the Azure VMware Solution for new features and software lifecycle management. You'll receive a notification through Azure Service Health that includes the timeline of the maintenance. For more information, see [Host maintenance and lifecycle management](concepts-private-clouds-clusters.md#host-maintenance-and-lifecycle-management). +## April 2023 ++Introducing run commands for HCX on Azure VMware Solution. You can use these run commands to restart HCX Cloud Manager in your Azure VMware Solution private cloud. You can also scale HCX Cloud Manager by using run commands. To learn how to use run commands for HCX, see [Use HCX Run commands](use-hcx-run-commands.md). ## February 2023 |
azure-vmware | Concepts Run Command | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-vmware/concepts-run-command.md | Azure VMware Solution supports the following operations: - [Deploy disaster recovery using JetStream](deploy-disaster-recovery-using-jetstream.md) +- [Use HCX Run commands](use-hcx-run-commands.md) + >[!NOTE] >Run commands are executed one at a time in the order submitted. |
bastion | Connect Ip Address | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/bastion/connect-ip-address.md | -# Connect to a VM via specified private IP address through the portal +# Connect to a VM via specified private IP address IP-based connection lets you connect to your on-premises, non-Azure, and Azure virtual machines via Azure Bastion over ExpressRoute or a VPN site-to-site connection using a specified private IP address. The steps in this article show you how to configure your Bastion deployment, and then connect to an on-premises resource using IP-based connection. For more information about Azure Bastion, see the [Overview](bastion-overview.md). Before you begin these steps, verify that you have the following environment set 1. Select **Apply** to apply the changes. It takes a few minutes for the Bastion configuration to complete. -## Connect to VM +## Connect to VM - Azure portal 1. To connect to a VM using a specified private IP address, you make the connection from Bastion to the VM, not directly from the VM page. On your Bastion page, select **Connect** to open the Connect page. Before you begin these steps, verify that you have the following environment set 1. Select **Connect** to connect to your virtual machine. +## Connect to VM - native client ++You can connect to VMs using a specified IP address with native client via SSH, RDP, or tunneling. This feature doesn't currently support Azure Active Directory authentication or custom ports and protocols. To learn more about configuring native client support, see [Connect to a VM - native client](connect-native-client-windows.md). Use the following commands as examples: ++ **RDP:** + + ```azurecli + az network bastion rdp --name "<BastionName>" --resource-group "<ResourceGroupName>" --target-ip-address "<VMIPAddress>" + ``` + + **SSH:** + + ```azurecli + az network bastion ssh --name "<BastionName>" --resource-group "<ResourceGroupName>" --target-ip-address "<VMIPAddress>" --auth-type "ssh-key" --username "<Username>" --ssh-key "<Filepath>" + ``` + + **Tunnel:** + + ```azurecli + az network bastion tunnel --name "<BastionName>" --resource-group "<ResourceGroupName>" --target-ip-address "<VMIPAddress>" --resource-port "<TargetVMPort>" --port "<LocalMachinePort>" + ``` ++ ## Next steps -Read the [Bastion FAQ](bastion-faq.md) for additional information. +Read the [Bastion FAQ](bastion-faq.md) for additional information. |
bastion | Connect Native Client Windows | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/bastion/connect-native-client-windows.md | This connection supports file upload from the local computer to the target VM. F ssh <username>@127.0.0.1 -p <LocalMachinePort> ``` +## <a name="connect-IP"></a>Connect to VM - IP address ++This section helps you connect to your on-premises, non-Azure, and Azure virtual machines via Azure Bastion using a specified private IP address from native client. You can replace `--target-resource-id` with `--target-ip-address` in any of the above commands to connect to your VM by using the specified IP address. ++> [!Note] +> This feature doesn't currently support Azure AD authentication or custom ports and protocols. For more information on IP-based connection, see [Connect to a VM - IP address](connect-ip-address.md). ++Use the following commands as examples: +++ **RDP:** + + ```azurecli + az network bastion rdp --name "<BastionName>" --resource-group "<ResourceGroupName>" --target-ip-address "<VMIPAddress>" + ``` + + **SSH:** + + ```azurecli + az network bastion ssh --name "<BastionName>" --resource-group "<ResourceGroupName>" --target-ip-address "<VMIPAddress>" --auth-type "ssh-key" --username "<Username>" --ssh-key "<Filepath>" + ``` + + **Tunnel:** + + ```azurecli + az network bastion tunnel --name "<BastionName>" --resource-group "<ResourceGroupName>" --target-ip-address "<VMIPAddress>" --resource-port "<TargetVMPort>" --port "<LocalMachinePort>" + ``` ++ ## Next steps [Upload or download files](vm-upload-download-native.md) |
batch | Batch Automatic Scaling | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/batch/batch-automatic-scaling.md | Title: Automatically scale compute nodes in an Azure Batch pool -description: Enable automatic scaling on a cloud pool to dynamically adjust the number of compute nodes in the pool. + Title: Autoscale compute nodes in an Azure Batch pool +description: Enable automatic scaling on an Azure Batch cloud pool to dynamically adjust the number of compute nodes in the pool. Previously updated : 04/06/2023 Last updated : 04/12/2023 -# Create an automatic formula for scaling compute nodes in a Batch pool ++# Create a formula to automatically scale compute nodes in a Batch pool Azure Batch can automatically scale pools based on parameters that you define, saving you time and money. With automatic scaling, Batch dynamically adds nodes to a pool as task demands increase, and removes compute nodes as task demands decrease. -To enable automatic scaling on a pool of compute nodes, you associate the pool with an *autoscale formula* that you define. The Batch service uses the autoscale formula to determine how many nodes are needed to execute your workload. These nodes may be dedicated nodes or [Azure Spot nodes](batch-spot-vms.md). Batch periodically reviews service metrics data and uses it to adjust the number of nodes in the pool based on your formula and at an interval that you define. +To enable automatic scaling on a pool of compute nodes, you associate the pool with an *autoscale formula* that you define. The Batch service uses the autoscale formula to determine how many nodes are needed to execute your workload. These nodes can be dedicated nodes or [Azure Spot nodes](batch-spot-vms.md). Batch periodically reviews service metrics data and uses it to adjust the number of nodes in the pool based on your formula and at an interval that you define. -You can enable automatic scaling when you create a pool, or apply it to an existing pool. Batch enables you to evaluate your formulas before assigning them to pools and to monitor the status of automatic scaling runs. Once you configure a pool with automatic scaling, you can make changes to the formula later. +You can enable automatic scaling when you create a pool, or apply it to an existing pool. Batch lets you evaluate your formulas before assigning them to pools and monitor the status of automatic scaling runs. Once you configure a pool with automatic scaling, you can make changes to the formula later. > [!IMPORTANT]-> When you create a Batch account, you can specify the [pool allocation mode](accounts.md), which determines whether pools are allocated in a Batch service subscription (the default) or in your user subscription. If you created your Batch account with the default Batch service configuration, then your account is limited to a maximum number of cores that can be used for processing. The Batch service scales compute nodes only up to that core limit. For this reason, the Batch service may not reach the target number of compute nodes specified by an autoscale formula. See [Quotas and limits for the Azure Batch service](batch-quota-limit.md) for information on viewing and increasing your account quotas. +> When you create a Batch account, you can specify the [pool allocation mode](accounts.md), which determines whether pools are allocated in a Batch service subscription (the default) or in your user subscription.
If you created your Batch account with the default Batch service configuration, then your account is limited to a maximum number of cores that can be used for processing. The Batch service scales compute nodes only up to that core limit. For this reason, the Batch service might not reach the target number of compute nodes specified by an autoscale formula. To learn how to view and increase your account quotas, see [Quotas and limits for the Azure Batch service](batch-quota-limit.md). > >If you created your account with user subscription mode, then your account shares in the core quota for the subscription. For more information, see [Virtual Machines limits](../azure-resource-manager/management/azure-subscription-service-limits.md#virtual-machines-limits) in [Azure subscription and service limits, quotas, and constraints](../azure-resource-manager/management/azure-subscription-service-limits.md). You can enable automatic scaling when you create a pool, or apply it to an exist An autoscale formula is a string value that you define that contains one or more statements. The autoscale formula is assigned to a pool's [autoScaleFormula](/rest/api/batchservice/enable-automatic-scaling-on-a-pool) element (Batch REST) or [CloudPool.AutoScaleFormula](/dotnet/api/microsoft.azure.batch.cloudpool.autoscaleformula) property (Batch .NET). The Batch service uses your formula to determine the target number of compute nodes in the pool for the next interval of processing. The formula string can't exceed 8 KB, can include up to 100 statements that are separated by semicolons, and can include line breaks and comments. -You can think of automatic scaling formulas as a Batch autoscale "language." Formula statements are free-formed expressions that can include both service-defined variables (defined by the Batch service) and user-defined variables. Formulas can perform various operations on these values by using built-in types, operators, and functions. For example, a statement might take the following form: +You can think of automatic scaling formulas as a Batch autoscale "language." Formula statements are free-formed expressions that can include both *service-defined variables*, which are defined by the Batch service, and *user-defined variables*. Formulas can perform various operations on these values by using built-in types, operators, and functions. For example, a statement might take the following form: ``` $myNewVariable = function($ServiceDefinedVariable, $myCustomVariable); ``` -Formulas generally contain multiple statements that perform operations on values that are obtained in previous statements. For example, first we obtain a value for `variable1`, then pass it to a function to populate `variable2`: +Formulas generally contain multiple statements that perform operations on values that are obtained in previous statements. For example, first you obtain a value for `variable1`, then pass it to a function to populate `variable2`: ``` $variable1 = function1($ServiceDefinedVariable); $variable2 = function2($OtherServiceDefinedVariable, $variable1); Include these statements in your autoscale formula to arrive at a target number of compute nodes. Dedicated nodes and Spot nodes each have their own target settings. An autoscale formula can include a target value for dedicated nodes, a target value for Spot nodes, or both. -The target number of nodes may be higher, lower, or the same as the current number of nodes of that type in the pool. 
Batch evaluates a pool's autoscale formula at a specific [automatic scaling intervals](#automatic-scaling-interval). Batch adjusts the target number of each type of node in the pool to the number that your autoscale formula specifies at the time of evaluation. +The target number of nodes might be higher, lower, or the same as the current number of nodes of that type in the pool. Batch evaluates a pool's autoscale formula at specific [automatic scaling intervals](#automatic-scaling-interval). Batch adjusts the target number of each type of node in the pool to the number that your autoscale formula specifies at the time of evaluation. ### Sample autoscale formulas -Below are examples of two autoscale formulas, which can be adjusted to work for most scenarios. The variables `startingNumberOfVMs` and `maxNumberofVMs` in the example formulas can be adjusted to your needs. +The following examples show two autoscale formulas, which can be adjusted to work for most scenarios. The variables `startingNumberOfVMs` and `maxNumberofVMs` in the example formulas can be adjusted to your needs. #### Pending tasks $NodeDeallocationOption = taskcompletion; #### Preempted nodes -This example creates a pool that starts with 25 Spot nodes. Every time a Spot node is preempted, it's replaced with a dedicated node. As with the first example, the `maxNumberofVMs` variable prevents the pool from exceeding 25 VMs. This example is useful for taking advantage of Spot VMs while also ensuring that only a fixed number of pre-emptions occur for the lifetime of the pool. +This example creates a pool that starts with 25 Spot nodes. Every time a Spot node is preempted, it's replaced with a dedicated node. As with the first example, the `maxNumberofVMs` variable prevents the pool from exceeding 25 VMs. This example is useful for taking advantage of Spot VMs while also ensuring that only a fixed number of preemptions occur for the lifetime of the pool. ``` maxNumberofVMs = 25; $TargetLowPriorityNodes = min(maxNumberofVMs , maxNumberofVMs - $TargetDedicated $NodeDeallocationOption = taskcompletion; ``` -You'll learn more about [how to create autoscale formulas](#write-an-autoscale-formula) and see more [example autoscale formulas](#example-autoscale-formulas) later in this topic. +You'll learn more about [how to create autoscale formulas](#write-an-autoscale-formula) and see more [example autoscale formulas](#example-autoscale-formulas) later in this article. ## Variables -You can use both **service-defined** and **user-defined** variables in your autoscale formulas. +You can use both *service-defined* and *user-defined* variables in your autoscale formulas. The service-defined variables are built in to the Batch service. Some service-defined variables are read-write, and some are read-only. -User-defined variables are variables that you define. In the example formula shown above, `$TargetDedicatedNodes` and `$PendingTasks` are service-defined variables, while `startingNumberOfVMs` and `maxNumberofVMs` are user-defined variables. +User-defined variables are variables that you define. In the previous example, `$TargetDedicatedNodes` and `$PendingTasks` are service-defined variables, while `startingNumberOfVMs` and `maxNumberofVMs` are user-defined variables. > [!NOTE] > Service-defined variables are always preceded by a dollar sign ($). For user-defined variables, the dollar sign is optional. 
You can get and set the values of these service-defined variables to manage the | Variable | Description | | | |-| $TargetDedicatedNodes |The target number of dedicated compute nodes for the pool. This is specified as a target because a pool may not always achieve the desired number of nodes. For example, if the target number of dedicated nodes is modified by an autoscale evaluation before the pool has reached the initial target, the pool may not reach the target. <br /><br /> A pool in an account created in Batch service mode may not achieve its target if the target exceeds a Batch account node or core quota. A pool in an account created in user subscription mode may not achieve its target if the target exceeds the shared core quota for the subscription.| -| $TargetLowPriorityNodes |The target number of Spot compute nodes for the pool. This specified as a target because a pool may not always achieve the desired number of nodes. For example, if the target number of Spot nodes is modified by an autoscale evaluation before the pool has reached the initial target, the pool may not reach the target. A pool may also not achieve its target if the target exceeds a Batch account node or core quota. <br /><br /> For more information on Spot compute nodes, see [Use Spot VMs with Batch](batch-spot-vms.md). | -| $NodeDeallocationOption |The action that occurs when compute nodes are removed from a pool. Possible values are:<ul><li>**requeue**: The default value. Ends tasks immediately and puts them back on the job queue so that they're rescheduled. This action ensures the target number of nodes is reached as quickly as possible. However, it may be less efficient, as any running tasks are interrupted and restarted. <li>**terminate**: Ends tasks immediately and removes them from the job queue.<li>**taskcompletion**: Waits for currently running tasks to finish and then removes the node from the pool. Use this option to avoid tasks being interrupted and requeued, wasting any work the task has done.<li>**retaineddata**: Waits for all the local task-retained data on the node to be cleaned up before removing the node from the pool.</ul> | +| $TargetDedicatedNodes |The target number of dedicated compute nodes for the pool. Specified as a target because a pool might not always achieve the desired number of nodes. For example, if the target number of dedicated nodes is modified by an autoscale evaluation before the pool has reached the initial target, the pool might not reach the target. <br><br> A pool in an account created in Batch service mode might not achieve its target if the target exceeds a Batch account node or core quota. A pool in an account created in user subscription mode might not achieve its target if the target exceeds the shared core quota for the subscription.| +| $TargetLowPriorityNodes |The target number of Spot compute nodes for the pool. Specified as a target because a pool might not always achieve the desired number of nodes. For example, if the target number of Spot nodes is modified by an autoscale evaluation before the pool has reached the initial target, the pool might not reach the target. A pool might also not achieve its target if the target exceeds a Batch account node or core quota. <br><br> For more information on Spot compute nodes, see [Use Spot VMs with Batch](batch-spot-vms.md). | +| $NodeDeallocationOption |The action that occurs when compute nodes are removed from a pool. Possible values are:<br>- **requeue**: The default value. 
Ends tasks immediately and puts them back on the job queue so that they're rescheduled. This action ensures the target number of nodes is reached as quickly as possible. However, it might be less efficient, because any running tasks are interrupted and then must be restarted. <br>- **terminate**: Ends tasks immediately and removes them from the job queue.<br>- **taskcompletion**: Waits for currently running tasks to finish and then removes the node from the pool. Use this option to avoid tasks being interrupted and requeued, wasting any work the task has done.<br>- **retaineddata**: Waits for all the local task-retained data on the node to be cleaned up before removing the node from the pool. | > [!NOTE]-> The `$TargetDedicatedNodes` variable can also be specified using the alias `$TargetDedicated`. Similarly, the `$TargetLowPriorityNodes` variable can be specified using the alias `$TargetLowPriority`. If both the fully named variable and its alias are set by the formula, the value assigned to the fully named variable will take precedence. +> The `$TargetDedicatedNodes` variable can also be specified using the alias `$TargetDedicated`. Similarly, the `$TargetLowPriorityNodes` variable can be specified using the alias `$TargetLowPriority`. If both the fully named variable and its alias are set by the formula, the value assigned to the fully named variable takes precedence. ### Read-only service-defined variables You can get the value of these service-defined variables to make adjustments that are based on metrics from the Batch service. > [!IMPORTANT]-> Job release tasks aren't currently included in variables that provide task counts, such as $ActiveTasks and $PendingTasks. Depending on your autoscale formula, this can result in nodes being removed with no nodes available to run job release tasks. +> Job release tasks aren't currently included in variables that provide task counts, such as `$ActiveTasks` and `$PendingTasks`. Depending on your autoscale formula, this can result in nodes being removed with no nodes available to run job release tasks. > [!TIP] > These read-only service-defined variables are *objects* that provide various methods to access data associated with each. For more information, see [Obtain sample data](#obtain-sample-data) later in this article. You can get the value of these service-defined variables to make adjustments tha | $NetworkInBytes |The number of inbound bytes. Retiring after 2024-Mar-31. | | $NetworkOutBytes |The number of outbound bytes. Retiring after 2024-Mar-31. | | $SampleNodeCount |The count of compute nodes. Retiring after 2024-Mar-31. |-| $ActiveTasks |The number of tasks that are ready to execute but aren't yet executing. This includes all tasks that are in the active state and whose dependencies have been satisfied. Any tasks that are in the active state but whose dependencies haven't been satisfied are excluded from the $ActiveTasks count. For a multi-instance task, $ActiveTasks includes the number of instances set on the task.| +| $ActiveTasks |The number of tasks that are ready to execute but aren't yet executing. This includes all tasks that are in the active state and whose dependencies have been satisfied. Any tasks that are in the active state but whose dependencies haven't been satisfied are excluded from the `$ActiveTasks` count. For a multi-instance task, `$ActiveTasks` includes the number of instances set on the task.| | $RunningTasks |The number of tasks in a running state. |-| $PendingTasks |The sum of $ActiveTasks and $RunningTasks. 
| +| $PendingTasks |The sum of `$ActiveTasks` and `$RunningTasks`. | | $SucceededTasks |The number of tasks that finished successfully. | | $FailedTasks |The number of tasks that failed. | | $TaskSlotsPerNode |The number of task slots that can be used to run concurrent tasks on a single compute node in the pool. | You can get the value of these service-defined variables to make adjustments tha > before this date. > [!WARNING]-> `$PreemptedNodeCount` is currently not available and will return `0` valued data. +> `$PreemptedNodeCount` is currently not available and returns `0`-valued data. > [!NOTE] > Use `$RunningTasks` when scaling based on the number of tasks running at a point in time, and `$ActiveTasks` when scaling based on the number of tasks that are queued up to run. Testing a double with a ternary operator (`double ? statement1 : statement2`), r ## Functions -You can use these predefined **functions** when defining an autoscale formula. +You can use these predefined *functions* when defining an autoscale formula. | Function | Return type | Description | | | | The *doubleVecList* value is converted to a single *doubleVec* before evaluation ## Metrics -You can use both resource and task metrics when you're defining a formula. You adjust the target number of dedicated nodes in the pool based on the metrics data that you obtain and evaluate. For more information on each metric, see the [Variables](#variables) section above. --<table> - <tr> - <th>Metric</th> - <th>Description</th> - </tr> - <tr> - <td><b>Resource</b></td> - <td><p>Resource metrics are based on the CPU, the bandwidth, the memory usage of compute nodes, and the number of nodes.</p> - <p> These service-defined variables are useful for making adjustments based on node count:</p> - <p><ul> - <li>$TargetDedicatedNodes</li> - <li>$TargetLowPriorityNodes</li> - <li>$CurrentDedicatedNodes</li> - <li>$CurrentLowPriorityNodes</li> - <li>$PreemptedNodeCount</li> - <li>$SampleNodeCount</li> - </ul></p> - <p>These service-defined variables are useful for making adjustments based on node resource usage:</p> - <p><ul> - <li>$CPUPercent</li> - <li>$WallClockSeconds</li> - <li>$MemoryBytes</li> - <li>$DiskBytes</li> - <li>$DiskReadBytes</li> - <li>$DiskWriteBytes</li> - <li>$DiskReadOps</li> - <li>$DiskWriteOps</li> - <li>$NetworkInBytes</li> - <li>$NetworkOutBytes</li></ul></p> - </tr> - <tr> - <td><b>Task</b></td> - <td><p>Task metrics are based on the status of tasks, such as Active, Pending, and Completed. The following service-defined variables are useful for making pool-size adjustments based on task metrics:</p> - <p><ul> - <li>$ActiveTasks</li> - <li>$RunningTasks</li> - <li>$PendingTasks</li> - <li>$SucceededTasks</li> - <li>$FailedTasks</li></ul></p> - </td> - </tr> -</table> +You can use both resource and task metrics when you define a formula. You adjust the target number of dedicated nodes in the pool based on the metrics data that you obtain and evaluate. For more information on each metric, see the [Variables](#variables) section. 
++| Metric | Description | +|-|--| +| Resource | Resource metrics are based on the CPU, the bandwidth, the memory usage of compute nodes, and the number of nodes.<br><br>These service-defined variables are useful for making adjustments based on node count:<br>- $TargetDedicatedNodes <br>- $TargetLowPriorityNodes <br>- $CurrentDedicatedNodes <br>- $CurrentLowPriorityNodes <br>- $PreemptedNodeCount <br>- $SampleNodeCount <br><br>These service-defined variables are useful for making adjustments based on node resource usage: <br>- $CPUPercent <br>- $WallClockSeconds <br>- $MemoryBytes <br>- $DiskBytes <br>- $DiskReadBytes <br>- $DiskWriteBytes <br>- $DiskReadOps <br>- $DiskWriteOps <br>- $NetworkInBytes <br>- $NetworkOutBytes | +| Task | Task metrics are based on the status of tasks, such as Active, Pending, and Completed. The following service-defined variables are useful for making pool-size adjustments based on task metrics: <br>- $ActiveTasks <br>- $RunningTasks <br>- $PendingTasks <br>- $SucceededTasks <br>- $FailedTasks | ## Obtain sample data -The core operation of an autoscale formula is to obtain task and resource metric data (samples), and then adjust pool size based on that data. As such, it's important to have a clear understanding of how autoscale formulas interact with samples. +The core operation of an autoscale formula is to obtain task and resource metrics data (samples), and then adjust pool size based on that data. As such, it's important to have a clear understanding of how autoscale formulas interact with samples. ### Methods -Autoscale formulas act on samples of metric data provided by the Batch service. A formula grows or shrinks the pool compute nodes based on the values that it obtains. Service-defined variables are objects that provide methods to access data that is associated with that object. For example, the following expression shows a request to get the last five minutes of CPU usage: +Autoscale formulas act on samples of metric data provided by the Batch service. A formula grows or shrinks the pool compute nodes based on the values that it obtains. Service-defined variables are objects that provide methods to access data that's associated with that object. For example, the following expression shows a request to get the last five minutes of CPU usage: ``` $CPUPercent.GetSample(TimeInterval_Minute * 5) ``` -The following methods may be used to obtain sample data about service-defined variables. +The following methods can be used to obtain sample data about service-defined variables. | Method | Description | | | |-| GetSample() |The `GetSample()` method returns a vector of data samples.<br/><br/>A sample is 30 seconds worth of metrics data. In other words, samples are obtained every 30 seconds. But as noted below, there's a delay between when a sample is collected and when it's available to a formula. As such, not all samples for a given time period may be available for evaluation by a formula.<ul><li>`doubleVec GetSample(double count)`: Specifies the number of samples to obtain from the most recent samples that were collected. `GetSample(1)` returns the last available sample. For metrics like `$CPUPercent`, however, `GetSample(1)` shouldn't be used, because it's impossible to know *when* the sample was collected. It could be recent, or, because of system issues, it might be much older. 
In such cases, it's better to use a time interval as shown below.<li>`doubleVec GetSample((timestamp or timeinterval) startTime [, double samplePercent])`: Specifies a time frame for gathering sample data. Optionally, it also specifies the percentage of samples that must be available in the requested time frame. For example, `$CPUPercent.GetSample(TimeInterval_Minute * 10)` would return 20 samples if all samples for the last 10 minutes are present in the `CPUPercent` history. If the last minute of history wasn't available, only 18 samples would be returned. In this case `$CPUPercent.GetSample(TimeInterval_Minute * 10, 95)` would fail because only 90 percent of the samples are available, but `$CPUPercent.GetSample(TimeInterval_Minute * 10, 80)` would succeed.<li>`doubleVec GetSample((timestamp or timeinterval) startTime, (timestamp or timeinterval) endTime [, double samplePercent])`: Specifies a time frame for gathering data, with both a start time and an end time. As mentioned above, there's a delay between when a sample is collected and when it becomes available to a formula. Consider this delay when you use the `GetSample` method. See `GetSamplePercent` below. | +| GetSample() |The `GetSample()` method returns a vector of data samples.<br><br>A sample is 30 seconds worth of metrics data. In other words, samples are obtained every 30 seconds. But as noted below, there's a delay between when a sample is collected and when it's available to a formula. As such, not all samples for a given time period might be available for evaluation by a formula. <br><br>- `doubleVec GetSample(double count)`: Specifies the number of samples to obtain from the most recent samples that were collected. `GetSample(1)` returns the last available sample. For metrics like `$CPUPercent`, however, `GetSample(1)` shouldn't be used, because it's impossible to know *when* the sample was collected. It could be recent, or, because of system issues, it might be much older. In such cases, it's better to use a time interval as shown below.<br><br>- `doubleVec GetSample((timestamp or timeinterval) startTime [, double samplePercent])`: Specifies a time frame for gathering sample data. Optionally, it also specifies the percentage of samples that must be available in the requested time frame. For example, `$CPUPercent.GetSample(TimeInterval_Minute * 10)` would return 20 samples if all samples for the last 10 minutes are present in the `CPUPercent` history. If the last minute of history wasn't available, only 18 samples would be returned. In this case `$CPUPercent.GetSample(TimeInterval_Minute * 10, 95)` would fail because only 90 percent of the samples are available, but `$CPUPercent.GetSample(TimeInterval_Minute * 10, 80)` would succeed.<br><br>- `doubleVec GetSample((timestamp or timeinterval) startTime, (timestamp or timeinterval) endTime [, double samplePercent])`: Specifies a time frame for gathering data, with both a start time and an end time. As mentioned above, there's a delay between when a sample is collected and when it becomes available to a formula. Consider this delay when you use the `GetSample` method. See `GetSamplePercent` below. | | GetSamplePeriod() |Returns the period of samples that were taken in a historical sample data set. |-| Count() |Returns the total number of samples in the metric history. | +| Count() |Returns the total number of samples in the metrics history. | | HistoryBeginTime() |Returns the time stamp of the oldest available data sample for the metric. 
| | GetSamplePercent() |Returns the percentage of samples that are available for a given time interval. For example, `doubleVec GetSamplePercent( (timestamp or timeinterval) startTime [, (timestamp or timeinterval) endTime] )`. Because the `GetSample` method fails if the percentage of samples returned is less than the `samplePercent` specified, you can use the `GetSamplePercent` method to check first. Then you can perform an alternate action if insufficient samples are present, without halting the automatic scaling evaluation. | ### Samples -The Batch service periodically takes samples of task and resource metrics and makes them available to your autoscale formulas. These samples are recorded every 30 seconds by the Batch service. However, there's typically a delay between when those samples were recorded and when they're made available to (and read by) your autoscale formulas. Additionally, samples may not be recorded for a particular interval because of factors such as network or other infrastructure issues. +The Batch service periodically takes samples of task and resource metrics and makes them available to your autoscale formulas. These samples are recorded every 30 seconds by the Batch service. However, there's typically a delay between when those samples were recorded and when they're made available to (and read by) your autoscale formulas. Additionally, samples might not be recorded for a particular interval because of factors such as network or other infrastructure issues. ### Sample percentage -When `samplePercent` is passed to the `GetSample()` method or the `GetSamplePercent()` method is called, _percent_ refers to a comparison between the total possible number of samples recorded by the Batch service and the number of samples that are available to your autoscale formula. +When `samplePercent` is passed to the `GetSample()` method or the `GetSamplePercent()` method is called, *percent* refers to a comparison between the total possible number of samples recorded by the Batch service and the number of samples that are available to your autoscale formula. -Let's look at a 10-minute timespan as an example. Because samples are recorded every 30 seconds within that 10-minute timespan, the maximum total number of samples recorded by Batch would be 20 samples (2 per minute). However, due to the inherent latency of the reporting mechanism and other issues within Azure, there may be only 15 samples that are available to your autoscale formula for reading. So, for example, for that 10-minute period, only 75% of the total number of samples recorded may be available to your formula. +Let's look at a 10-minute time span as an example. Because samples are recorded every 30 seconds within that 10-minute time span, the maximum total number of samples recorded by Batch would be 20 samples (2 per minute). However, due to the inherent latency of the reporting mechanism and other issues within Azure, there might be only 15 samples that are available to your autoscale formula for reading. So, for example, for that 10-minute period, only 75 percent of the total number of samples recorded might be available to your formula. ### GetSample() and sample ranges -Your autoscale formulas grow or shrink your pools by adding or removing nodes. Because nodes cost you money, be sure that your formulas use an intelligent method of analysis that is based on sufficient data. We recommend that you use a trending-type analysis in your formulas. 
This type grows and shrinks your pools based on a range of collected samples. +Your autoscale formulas grow and shrink your pools by adding or removing nodes. Because nodes cost you money, be sure that your formulas use an intelligent method of analysis that's based on sufficient data. It's recommended that you use a trending-type analysis in your formulas. This type grows and shrinks your pools based on a range of collected samples. To do so, use `GetSample(interval look-back start, interval look-back end)` to return a vector of samples: When Batch evaluates the above line, it returns a range of samples as a vector o $runningTasksSample=[1,1,1,1,1,1,1,1,1,1]; ``` -Once you've collected the vector of samples, you can then use functions like `min()`, `max()`, and `avg()` to derive meaningful values from the collected range. +After you collect the vector of samples, you can then use functions like `min()`, `max()`, and `avg()` to derive meaningful values from the collected range. To exercise extra caution, you can force a formula evaluation to fail if less than a certain sample percentage is available for a particular time period. When you force a formula evaluation to fail, you instruct Batch to cease further evaluation of the formula if the specified percentage of samples isn't available. In this case, no change is made to the pool size. To specify a required percentage of samples for the evaluation to succeed, specify it as the third parameter to `GetSample()`. Here, a requirement of 75 percent of samples is specified: To exercise extra caution, you can force a formula evaluation to fail if less th $runningTasksSample = $RunningTasks.GetSample(60 * TimeInterval_Second, 120 * TimeInterval_Second, 75); ``` -Because there may be a delay in sample availability, you should always specify a time range with a look-back start time that is older than one minute. It takes approximately one minute for samples to propagate through the system, so samples in the range `(0 * TimeInterval_Second, 60 * TimeInterval_Second)` may not be available. Again, you can use the percentage parameter of `GetSample()` to force a particular sample percentage requirement. +Because there might be a delay in sample availability, you should always specify a time range with a look-back start time that's older than one minute. It takes approximately one minute for samples to propagate through the system, so samples in the range `(0 * TimeInterval_Second, 60 * TimeInterval_Second)` might not be available. Again, you can use the percentage parameter of `GetSample()` to force a particular sample percentage requirement. > [!IMPORTANT]-> We strongly recommend that you **avoid relying *only* on `GetSample(1)` in your autoscale formulas**. This is because `GetSample(1)` essentially says to the Batch service, "Give me the last sample you have, no matter how long ago you retrieved it." Since it's only a single sample, and it may be an older sample, it may not be representative of the larger picture of recent task or resource state. If you do use `GetSample(1)`, make sure that it's part of a larger statement and not the only data point that your formula relies on. +> We strongly recommend that you **avoid relying *only* on `GetSample(1)` in your autoscale formulas**. This is because `GetSample(1)` essentially says to the Batch service, "Give me the last sample you have, no matter how long ago you retrieved it." 
Since it's only a single sample, and it might be an older sample, it might not be representative of the larger picture of recent task or resource state. If you do use `GetSample(1)`, make sure that it's part of a larger statement and not the only data point that your formula relies on. ## Write an autoscale formula -You build an autoscale formula by forming statements that use the above components, then combine those statements into a complete formula. In this section, we create an example autoscale formula that can perform real-world scaling decisions and make adjustments. +You build an autoscale formula by forming statements that use the above components, then combining those statements into a complete formula. In this section, you create an example autoscale formula that can perform real-world scaling decisions and make adjustments. First, let's define the requirements for our new autoscale formula. The formula should: First, let's define the requirements for our new autoscale formula. The formula - Always restrict the maximum number of dedicated nodes to 400. - When reducing the number of nodes, don't remove nodes that are running tasks; if necessary, wait until tasks have finished before removing nodes. -The first statement in our formula increases the number of nodes during high CPU usage. We define a statement that populates a user-defined variable (`$totalDedicatedNodes`) with a value that is 110 percent of the current target number of dedicated nodes, but only if the minimum average CPU usage during the last 10 minutes was above 70 percent. Otherwise, it uses the value for the current number of dedicated nodes. +The first statement in the formula increases the number of nodes during high CPU usage. You define a statement that populates a user-defined variable (`$totalDedicatedNodes`) with a value that is 110 percent of the current target number of dedicated nodes, but only if the minimum average CPU usage during the last 10 minutes was above 70 percent. Otherwise, it uses the value for the current number of dedicated nodes. ``` $totalDedicatedNodes = (min($CPUPercent.GetSample(TimeInterval_Minute * 10)) > 0.7) ? ($CurrentDedicatedNodes * 1.1) : $CurrentDedicatedNodes; ``` -To decrease the number of dedicated nodes during low CPU usage, the next statement in our formula sets the same `$totalDedicatedNodes` variable to 90 percent of the current target number of dedicated nodes, if average CPU usage in the past 60 minutes was under 20 percent. Otherwise, it uses the current value of `$totalDedicatedNodes` that we populated in the statement above. +To decrease the number of dedicated nodes during low CPU usage, the next statement in the formula sets the same `$totalDedicatedNodes` variable to 90 percent of the current target number of dedicated nodes, if average CPU usage in the past 60 minutes was under 20 percent. Otherwise, it uses the current value of `$totalDedicatedNodes` populated in the statement above. ``` $totalDedicatedNodes = (avg($CPUPercent.GetSample(TimeInterval_Minute * 60)) < 0.2) ? ($CurrentDedicatedNodes * 0.9) : $totalDedicatedNodes; ``` -Now, we limit the target number of dedicated compute nodes to a maximum of 400. +Now, limit the target number of dedicated compute nodes to a maximum of 400. ```-$TargetDedicatedNodes = min(400, $totalDedicatedNodes) +$TargetDedicatedNodes = min(400, $totalDedicatedNodes); ``` -Finally, we ensure that nodes aren't removed until their tasks are finished. +Finally, ensure that nodes aren't removed until their tasks are finished. 
``` $NodeDeallocationOption = taskcompletion; $totalDedicatedNodes = (min($CPUPercent.GetSample(TimeInterval_Minute * 10)) > 0.7) ? ($CurrentDedicatedNodes * 1.1) : $CurrentDedicatedNodes; $totalDedicatedNodes = (avg($CPUPercent.GetSample(TimeInterval_Minute * 60)) < 0.2) ? ($CurrentDedicatedNodes * 0.9) : $totalDedicatedNodes;-$TargetDedicatedNodes = min(400, $totalDedicatedNodes) +$TargetDedicatedNodes = min(400, $totalDedicatedNodes); $NodeDeallocationOption = taskcompletion; ``` > [!NOTE]-> If you choose to, you can include both comments and line breaks in formula strings. Also be aware that missing semicolons may result in evaluation errors. +> If you choose, you can include both comments and line breaks in formula strings. Also be aware that missing semicolons might result in evaluation errors. ## Automatic scaling interval Pool autoscaling can be configured using any of the [Batch SDKs](batch-apis-tool To create a pool with autoscaling enabled in .NET, follow these steps: 1. Create the pool with [BatchClient.PoolOperations.CreatePool](/dotnet/api/microsoft.azure.batch.pooloperations.createpool).-1. Set the [CloudPool.AutoScaleEnabled](/dotnet/api/microsoft.azure.batch.cloudpool.autoscaleenabled) property to `true`. +1. Set the [CloudPool.AutoScaleEnabled](/dotnet/api/microsoft.azure.batch.cloudpool.autoscaleenabled) property to **true**. 1. Set the [CloudPool.AutoScaleFormula](/dotnet/api/microsoft.azure.batch.cloudpool.autoscaleformula) property with your autoscale formula. 1. (Optional) Set the [CloudPool.AutoScaleEvaluationInterval](/dotnet/api/microsoft.azure.batch.cloudpool.autoscaleevaluationinterval) property (default is 15 minutes). 1. Commit the pool with [CloudPool.Commit](/dotnet/api/microsoft.azure.batch.cloudpool.commit) or [CommitAsync](/dotnet/api/microsoft.azure.batch.cloudpool.commitasync). await pool.CommitAsync(); ``` > [!IMPORTANT]-> When you create an autoscale-enabled pool, don't specify the _targetDedicatedNodes_ parameter or the _targetLowPriorityNodes_ parameter on the call to **CreatePool**. Instead, specify the **AutoScaleEnabled** and **AutoScaleFormula** properties on the pool. The values for these properties determine the target number of each type of node. +> When you create an autoscale-enabled pool, don't specify the *targetDedicatedNodes* parameter or the *targetLowPriorityNodes* parameter on the call to `CreatePool`. Instead, specify the `AutoScaleEnabled` and `AutoScaleFormula` properties on the pool. The values for these properties determine the target number of each type of node. > > To manually resize an autoscale-enabled pool (for example, with [BatchClient.PoolOperations.ResizePoolAsync](/dotnet/api/microsoft.azure.batch.pooloperations.resizepoolasync)), you must first disable automatic scaling on the pool, then resize it. +> [!TIP] +> For more examples of using the .NET SDK, see the [Batch .NET Quickstart repository](https://github.com/Azure-Samples/batch-dotnet-quickstart) on GitHub. + ### Python -To create autoscale-enabled pool with the Python SDK: +To create an autoscale-enabled pool with the Python SDK: 1. Create a pool and specify its configuration. 1. Add the pool to the service client. response = batch_service_client.pool.enable_auto_scale(pool_id, auto_scale_formu ``` > [!TIP]-> More examples of using the Python SDK can be found in the [Batch Python Quickstart repository](https://github.com/Azure-Samples/batch-python-quickstart) on GitHub. +> For more examples of using the Python SDK, see the [Batch Python Quickstart repository](https://github.com/Azure-Samples/batch-python-quickstart) on GitHub. 
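Besides the SDKs, the Azure CLI can also manage pool autoscaling. The following is a minimal sketch rather than part of the article: it assumes the `az batch pool autoscale enable` command with an ISO 8601 evaluation interval, and uses a placeholder pool ID with a deliberately inert formula. The next section covers the same operation in more depth with the .NET SDK.

```azurecli
# Minimal sketch: enable autoscaling on an existing pool from the Azure CLI.
# "mypool" is a placeholder; the formula is valid but won't resize the pool.
az batch pool autoscale enable \
    --pool-id "mypool" \
    --auto-scale-formula '$TargetDedicatedNodes = $CurrentDedicatedNodes;' \
    --auto-scale-evaluation-interval "PT15M"
```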
## Enable autoscaling on an existing pool When you enable autoscaling on an existing pool, keep in mind: - If you omit either the autoscale formula or interval, the Batch service continues to use the current value of that setting. > [!NOTE]-> If you specified values for the *targetDedicatedNodes* or *targetLowPriorityNodes* parameters of the **CreatePool** method when you created the pool in .NET, or for the comparable parameters in another language, then those values are ignored when the autoscale formula is evaluated. +> If you specified values for the *targetDedicatedNodes* or *targetLowPriorityNodes* parameters of the `CreatePool` method when you created the pool in .NET, or for the comparable parameters in another language, then those values are ignored when the autoscale formula is evaluated. This C# example uses the [Batch .NET](/dotnet/api/microsoft.azure.batch) library to enable autoscaling on an existing pool. Before you can evaluate an autoscale formula, you must first enable autoscaling In this REST API request, specify the pool ID in the URI, and the autoscale formula in the *autoScaleFormula* element of the request body. The response of the operation contains any error information that might be related to the formula. -This [Batch .NET](/dotnet/api/microsoft.azure.batch) example evaluates an autoscale formula. If the pool doesn't already use autoscaling, we enable it first. +The following [Batch .NET](/dotnet/api/microsoft.azure.batch) example evaluates an autoscale formula. If the pool doesn't already use autoscaling, enable it first. ```csharp // First obtain a reference to an existing pool CloudPool pool = await batchClient.PoolOperations.GetPoolAsync("myExistingPool") // You can't evaluate an autoscale formula on a non-autoscale-enabled pool. if (pool.AutoScaleEnabled == false) {- // We need a valid autoscale formula to enable autoscaling on the + // You need a valid autoscale formula to enable autoscaling on the // pool. This formula is valid, but won't resize the pool: await pool.EnableAutoScaleAsync( autoscaleFormula: "$TargetDedicatedNodes = $CurrentDedicatedNodes;", autoscaleEvaluationInterval: TimeSpan.FromMinutes(5)); // Batch limits EnableAutoScaleAsync calls to once every 30 seconds.- // Because we want to apply our new autoscale formula below if it - // evaluates successfully, and we *just* enabled autoscaling on - // this pool, we pause here to ensure we pass that threshold. + // Because you want to apply your new autoscale formula below if it + // evaluates successfully, and you *just* enabled autoscaling on + // this pool, pause here to ensure you pass that threshold. Thread.Sleep(TimeSpan.FromSeconds(31)); // Refresh the properties of the pool so that we've got the if (pool.AutoScaleEnabled == false) await pool.RefreshAsync(); } -// We must ensure that autoscaling is enabled on the pool prior to +// You must ensure that autoscaling is enabled on the pool prior to // evaluating a formula if (pool.AutoScaleEnabled == true) { AutoScaleRun.Results: ## Get information about autoscale runs -It's recommended to periodically check the Batch service's evaluation of your autoscale formula. To do so, get -(or refresh) a reference to the pool, then examine the properties of its last autoscale run. +It's recommended to periodically check the Batch service's evaluation of your autoscale formula. To do so, get (or refresh) a reference to the pool, then examine the properties of its last autoscale run. 
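From the command line, a quick way to inspect the most recent run is to query the pool's `autoScaleRun` property. The following Azure CLI sketch assumes a placeholder pool ID:

```azurecli
# Minimal sketch: print the latest autoscale run recorded for the pool,
# including its timestamp, results, and any error. "mypool" is a placeholder.
az batch pool show --pool-id "mypool" --query "autoScaleRun"
```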
In Batch .NET, the [CloudPool.AutoScaleRun](/dotnet/api/microsoft.azure.batch.cloudpool.autoscalerun) property has several properties that provide information about the latest automatic scaling run performed on the pool: In Batch .NET, the [CloudPool.AutoScaleRun](/dotnet/api/microsoft.azure.batch.cl - [AutoScaleRun.Results](/dotnet/api/microsoft.azure.batch.autoscalerun.results) - [AutoScaleRun.Error](/dotnet/api/microsoft.azure.batch.autoscalerun.error) -In the REST API, the [Get information about a pool](/rest/api/batchservice/get-information-about-a-pool) request returns information about the pool, which includes the latest automatic scaling run information in the [autoScaleRun](/rest/api/batchservice/get-information-about-a-pool) property. +In the REST API, the [Get information about a pool request](/rest/api/batchservice/get-information-about-a-pool) returns information about the pool, which includes the latest automatic scaling run information in the [autoScaleRun](/rest/api/batchservice/get-information-about-a-pool) property. -The following C# example uses the Batch .NET library to print information about the last autoscaling run on pool _myPool_. +The following C# example uses the Batch .NET library to print information about the last autoscaling run on pool *myPool*. ```csharp CloudPool pool = await myBatchClient.PoolOperations.GetPoolAsync("myPool"); Error: You can also check automatic scaling history by querying [PoolAutoScaleEvent](batch-pool-autoscale-event.md). Batch emits this event to record each occurrence of autoscale formula evaluation and execution, which can be helpful to troubleshoot potential issues. Sample event for PoolAutoScaleEvent:+ ```json { "id": "poolId", $TargetDedicatedNodes = $isWorkingWeekdayHour ? 20:10; $NodeDeallocationOption = taskcompletion; ``` -`$curTime` can be adjusted to reflect your local time zone by adding `time()` to the product of `TimeZoneInterval_Hour` and your UTC offset. For instance, use `$curTime = time() + (-6 * TimeInterval_Hour);` for Mountain Daylight Time (MDT). Keep in mind that the offset would need to be adjusted at the start and end of daylight saving time (if applicable). +`$curTime` can be adjusted to reflect your local time zone by adding `time()` to the product of `TimeInterval_Hour` and your UTC offset. For instance, use `$curTime = time() + (-6 * TimeInterval_Hour);` for Mountain Daylight Time (MDT). Keep in mind that the offset needs to be adjusted at the start and end of daylight saving time, if applicable. ### Example 2: Task-based adjustment -In this C# example, the pool size is adjusted based on the number of tasks in the queue. We've included both comments and line breaks in the formula strings. +In this C# example, the pool size is adjusted based on the number of tasks in the queue. Both comments and line breaks are included in the formula strings. ```csharp // Get pending tasks for the past 15 minutes. $samples = $PendingTasks.GetSamplePercent(TimeInterval_Minute * 15);-// If we have fewer than 70 percent data points, we use the last sample point, -// otherwise we use the maximum of last sample point and the history average. +// If you have fewer than 70 percent data points, use the last sample point, +// otherwise use the maximum of last sample point and the history average. $tasks = $samples < 70 ? 
max(0,$PendingTasks.GetSample(1)) : max( $PendingTasks.GetSample(1), avg($PendingTasks.GetSample(TimeInterval_Minute * 15))); // If number of pending tasks is not 0, set targetVM to pending tasks, otherwise // half of current dedicated. $NodeDeallocationOption = taskcompletion; ### Example 3: Accounting for parallel tasks -This C# example adjusts the pool size based on the number of tasks. This formula also takes into account the [TaskSlotsPerNode](/dotnet/api/microsoft.azure.batch.cloudpool.taskslotspernode) value that has been set for the pool. This approach is useful in situations where [parallel task execution](batch-parallel-node-tasks.md) has been enabled on your pool. +This C# example adjusts the pool size based on the number of tasks. This formula also takes into account the [TaskSlotsPerNode](/dotnet/api/microsoft.azure.batch.cloudpool.taskslotspernode) value that's been set for the pool. This approach is useful in situations where [parallel task execution](batch-parallel-node-tasks.md) has been enabled on your pool. ```csharp // Determine whether 70 percent of the samples have been recorded in the past Specifically, this formula does the following: - Sets the initial pool size to four nodes. - Doesn't adjust the pool size within the first 10 minutes of the pool's lifecycle. - After 10 minutes, obtains the max value of the number of running and active tasks within the past 60 minutes.- - If both values are 0 (indicating that no tasks were running or active in the last 60 minutes), the pool size is set to 0. + - If both values are 0, indicating that no tasks were running or active in the last 60 minutes, the pool size is set to 0. - If either value is greater than zero, no change is made. ```csharp string formula = string.Format(@" ## Next steps - Learn how to [execute multiple tasks simultaneously on the compute nodes in your pool](batch-parallel-node-tasks.md). Along with autoscaling, this can help to lower job duration for some workloads, saving you money.-- Learn how to [query the Azure Batch service efficiently](batch-efficient-list-queries.md) for further efficiency.+- Learn how to [query the Azure Batch service efficiently](batch-efficient-list-queries.md). |
batch | Quick Create Terraform | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/batch/quick-create-terraform.md | Title: 'Quickstart: Create an Azure Batch account using Terraform' description: 'In this article, you create an Azure Batch account using Terraform' Previously updated : 4/1/2023- Last updated : 4/14/2023+ |
cdn | Create Profile Endpoint Terraform | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cdn/create-profile-endpoint-terraform.md | description: 'In this article, you create an Azure CDN profile and endpoint usin Previously updated : 4/12/2023- Last updated : 4/14/2023+ |
cognitive-services | Cognitive Services Data Loss Prevention | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/cognitive-services-data-loss-prevention.md | Title: Data Loss Prevention #Required; page title is displayed in search results. Include the brand. -description: Cognitive Services Data Loss Prevention capabilities allow customers to configure the list of outbound URLs their Cognitive Services resources are allowed to access. This configuration creates another level of control for customers to prevent data loss. #Required; article description that is displayed in search results. ---- Previously updated : 03/31/2023 #Required; mm/dd/yyyy format.-+ Title: Data Loss Prevention +description: Cognitive Services Data Loss Prevention capabilities allow customers to configure the list of outbound URLs their Cognitive Services resources are allowed to access. This configuration creates another level of control for customers to prevent data loss. ++++ Last updated : 03/31/2023+ # Configure data loss prevention for Azure Cognitive Services |
cognitive-services | Create Account Terraform | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/create-account-terraform.md | keywords: cognitive services, cognitive solutions, cognitive intelligence, cogni Previously updated : 3/29/2023- Last updated : 4/14/2023+ |
cognitive-services | Content Filter | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/openai/concepts/content-filter.md | |
communication-services | Whats New | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/whats-new.md | Title: What's new in Azure Communication Services #Required; page title is displayed in search results. Include the brand. -description: All of the latest additions to Azure Communication Services #Required; article description that is displayed in search results. ---- Previously updated : 03/12/2023 #Required; mm/dd/yyyy format.-+ Title: What's new in Azure Communication Services +description: All of the latest additions to Azure Communication Services ++++ Last updated : 03/12/2023+ |
container-apps | Alerts | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-apps/alerts.md | Title: Set up alerts in Azure Container Apps description: Set up alerts to monitor your container app. -+ Last updated 08/30/2022-+ # Set up alerts in Azure Container Apps |
container-apps | Azure Arc Enable Cluster | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-apps/azure-arc-enable-cluster.md | Title: 'Tutorial: Enable Azure Container Apps on Azure Arc-enabled Kubernetes' description: 'Tutorial: learn how to set up Azure Container Apps in your Azure Arc-enabled Kubernetes clusters.' -+ Last updated 3/24/2023-+ # Tutorial: Enable Azure Container Apps on Azure Arc-enabled Kubernetes (Preview) |
container-apps | Container Console | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-apps/container-console.md | Title: Connect to a container console in Azure Container Apps description: Connect to a container console in your container app. -+ Last updated 08/30/2022-+ |
container-apps | Containerapp Up | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-apps/containerapp-up.md | Title: Deploy Azure Container Apps with the az containerapp up command description: How to deploy a container app with the az containerapp up command -+ Last updated 11/08/2022-+ # Deploy Azure Container Apps with the az containerapp up command |
container-apps | Dapr Github Actions | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-apps/dapr-github-actions.md | Title: Tutorial - Deploy a Dapr application with GitHub Actions for Azure Container Apps description: Learn about multiple revision management by deploying a Dapr application with GitHub Actions and Azure Container Apps. --++ |
container-apps | Dapr Overview | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-apps/dapr-overview.md | This guide provides insight into core Dapr concepts and details regarding the Da | [**Secrets**][dapr-secrets] | Access secrets from your application code or reference secure values in your Dapr components. | > [!NOTE]-> The above table covers stable Dapr APIs. To learn more about using alpha APIs and features, [see limitations](#unsupported-dapr-capabilities). +> The above table covers stable Dapr APIs. To learn more about using alpha APIs and features, [see the Dapr FAQ][dapr-faq]. ## Dapr concepts overview This resource defines a Dapr component called `dapr-pubsub` via ARM. -## Release cadence for Dapr --The latest version of Dapr in Azure Container Apps will be available within six weeks after [the Dapr OSS release][dapr-release]. - ## Limitations ### Unsupported Dapr capabilities The latest version of Dapr in Azure Container Apps will be available within six - **Dapr Configuration spec**: Any capabilities that require use of the Dapr configuration spec. - **Declarative pub/sub subscriptions** - **Any Dapr sidecar annotations not listed above**-- **Alpha APIs and components**: Azure Container Apps doesn't guarantee the availability of Dapr alpha APIs and features. If available to use, they are on a self-service, opt-in basis. Alpha APIs and components are provided "as is" and "as available," and are continually evolving as they move toward stable status. Alpha APIs and components aren't covered by customer support.+- **Alpha APIs and components**: Azure Container Apps doesn't guarantee the availability of Dapr alpha APIs and features. For more information, refer to the [Dapr FAQ][dapr-faq]. ### Known limitations Now that you've learned about Dapr and some of the challenges it solves: - Try [Deploying a Dapr application to Azure Container Apps using the Azure CLI][dapr-quickstart] or [Azure Resource Manager][dapr-arm-quickstart]. - Walk through a tutorial [using GitHub Actions to automate changes for a multi-revision, Dapr-enabled container app][dapr-github-actions]. - Learn how to [perform event-driven work using Dapr bindings][dapr-bindings-tutorial]+- [Answer common questions about the Dapr integration with Azure Container Apps][dapr-faq] <!-- Links Internal --> Now that you've learned about Dapr and some of the challenges it solves: [dapr-arm-quickstart]: ./microservices-dapr-azure-resource-manager.md [dapr-github-actions]: ./dapr-github-actions.md [dapr-bindings-tutorial]: ./microservices-dapr-bindings.md+[dapr-faq]: ./faq.yml#dapr <!-- Links External --> |
container-apps | Get Started | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-apps/get-started.md | Title: 'Quickstart: Deploy your first container app with containerapp up' description: Deploy your first application to Azure Container Apps using the Azure CLI containerapp up command. -+ Last updated 03/29/2023-+ ms.devlang: azurecli |
container-apps | Log Monitoring | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-apps/log-monitoring.md | Title: Monitor logs in Azure Container Apps with Log Analytics description: Monitor your container app logs with Log Analytics -+ Last updated 08/30/2022-+ # Monitor logs in Azure Container Apps with Log Analytics |
container-apps | Log Options | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-apps/log-options.md | Title: Log storage and monitoring options in Azure Container Apps description: Description of logging options in Azure Container Apps -+ Last updated 09/29/2022-+ # Log storage and monitoring options in Azure Container Apps |
container-apps | Log Streaming | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-apps/log-streaming.md | Title: View log streams in Azure Container Apps description: View your container app's log stream. -+ Last updated 03/24/2023-+ # View log streams in Azure Container Apps |
container-apps | Logging | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-apps/logging.md | Title: Application logging in Azure Container Apps description: Description of logging in Azure Container Apps -+ Last updated 09/29/2022-+ # Application Logging in Azure Container Apps |
container-apps | Managed Identity Image Pull | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-apps/managed-identity-image-pull.md | Title: Azure Container Apps image pull from Azure Container Registry with managed identity description: Set up Azure Container Apps to authenticate Azure Container Registry image pulls with managed identity -+ Last updated 09/16/2022-+ zone_pivot_groups: container-apps-interface-types |
container-apps | Managed Identity | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-apps/managed-identity.md | Title: Managed identities in Azure Container Apps description: Using managed identities in Container Apps -+ Last updated 09/29/2022-+ # Managed identities in Azure Container Apps |
container-apps | Metrics | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-apps/metrics.md | Title: Monitor Azure Container Apps metrics description: Monitor your running apps metrics -+ Last updated 08/30/2022-+ # Monitor Azure Container Apps metrics |
container-apps | Observability | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-apps/observability.md | Title: Observability in Azure Container Apps description: Monitor your running app in Azure Container Apps -+ Last updated 07/29/2022-+ # Observability in Azure Container Apps |
container-apps | Policy Reference | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-apps/policy-reference.md | Title: Built-in policy definitions for Azure Container Apps description: Lists Azure Policy built-in policy definitions for Azure Container Apps. These built-in policy definitions provide common approaches to managing your Azure resources.--++ Last updated 02/21/2023 |
container-apps | Quickstart Code To Cloud | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-apps/quickstart-code-to-cloud.md | Title: "Quickstart: Build and deploy your app from a repository to Azure Container Apps" description: Build your container app from a local or GitHub source repository and deploy in Azure Container Apps using az containerapp up. -+ Last updated 03/29/2023-+ zone_pivot_groups: container-apps-image-build-from-repo |
container-apps | Quickstart Portal | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-apps/quickstart-portal.md | Title: 'Quickstart: Deploy your first container app using the Azure portal' description: Deploy your first application to Azure Container Apps using the Azure portal. -+ Last updated 12/13/2021-+ |
container-apps | Service Connector | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-apps/service-connector.md | Title: Connect a container app to a cloud service with Service Connector description: Learn to connect a container app to an Azure service using the Azure portal or the CLI.--++ Last updated 06/16/2022 |
container-instances | Container Instances Quickstart Terraform | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-instances/container-instances-quickstart-terraform.md | Title: 'Quickstart: Create an Azure Container Instance with a public IP address description: 'In this article, you create an Azure Container Instance with a public IP address using Terraform' Previously updated : 3/16/2023- Last updated : 4/14/2023+ |
cosmos-db | Continuous Backup Restore Introduction | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/continuous-backup-restore-introduction.md | description: Azure Cosmos DB's point-in-time restore feature helps to recover da Previously updated : 03/02/2023 Last updated : 03/31/2023 The time window available for restore (also known as retention period) is the lo The selected option depends on the chosen tier of continuous backup. The point in time for restore can be any timestamp within the retention period no further back than the point when the resource was created. In strong consistency mode, backups taken in the write region are more up to date when compared to the read regions. Read regions can lag behind due to network or other transient issues. While doing restore, you can [get the latest restorable timestamp](get-latest-restore-timestamp.md) for a given resource in a specific region. Getting the latest timestamp ensures that the resource has taken backups up to the given timestamp, and can restore in that region. -Currently, you can restore an Azure Cosmos DB account (API for NoSQL or MongoDB) contents at a specific point in time to another account. You can perform this restore operation via the [Azure portal](restore-account-continuous-backup.md#restore-account-portal), the [Azure CLI](restore-account-continuous-backup.md#restore-account-cli) (Azure CLI), [Azure PowerShell](restore-account-continuous-backup.md#restore-account-powershell), or [Azure Resource Manager templates](restore-account-continuous-backup.md#restore-arm-template). API for Table or Gremlin are in preview and supported through [Azure CLI](restore-account-continuous-backup.md#restore-account-cli) (Azure CLI) and [Azure PowerShell](restore-account-continuous-backup.md#restore-account-powershell). +Currently, you can restore the contents of an Azure Cosmos DB account (API for NoSQL or MongoDB, API for Table, API for Gremlin) at a specific point in time to another account. You can perform this restore operation via the [Azure portal](restore-account-continuous-backup.md#restore-account-portal), the [Azure CLI](restore-account-continuous-backup.md#restore-account-cli), [Azure PowerShell](restore-account-continuous-backup.md#restore-account-powershell), or [Azure Resource Manager templates](restore-account-continuous-backup.md#restore-arm-template). ## Backup storage redundancy Currently the point in time restore functionality has the following limitations: * Azure Cosmos DB APIs for SQL and MongoDB are supported for continuous backup. API for Cassandra isn't supported now. -* API for Table and Gremlin are in preview and supported via PowerShell and Azure CLI. - * Multi-region write accounts aren't supported. * Currently Azure Synapse Link isn't fully compatible with continuous backup mode. For more information about backup with analytical store, see [analytical store backup](analytical-store-introduction.md#backup). |
cosmos-db | Continuous Backup Restore Permissions | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/continuous-backup-restore-permissions.md | description: Learn how to isolate and restrict the restore permissions for conti Previously updated : 02/17/2023 Last updated : 03/31/2023 -Azure Cosmos DB allows you to isolate and restrict the restore permissions for continuous backup account to a specific role or a principal. The owner of the account can trigger a restore and assign a role to other principals to perform the restore operation. These permissions can be applied at the subscription scope or more granularly at the source account scope as shown in the following image: +Azure Cosmos DB allows you to isolate and restrict the restore permissions for a continuous backup account to a specific role or a principal. These permissions can be applied at the subscription scope or more granularly at the source account scope as shown in the following image: :::image type="content" source="./media/continuous-backup-restore-permissions/restore-roles-permissions.svg" alt-text="List of roles required to perform restore operation." border="false"::: Scope is a set of resources that have access, to learn more on scopes, see the [ ## Assign roles for restore using the Azure portal -To perform a restore, a user or a principal need the permission to restore (that is *restore/action* permission), and permission to provision a new account (that is *write* permission). To grant these permissions, the owner can assign the `CosmosRestoreOperator` and `Cosmos DB Operator` built in roles to a principal. +To perform a restore, a user or a principal needs the permission to restore (that is, *restore/action* permission), and permission to provision a new account (that is, *write* permission). To grant these permissions, the owner of the subscription can assign the `CosmosRestoreOperator` and `Cosmos DB Operator` built-in roles to a principal. 1. Sign into the [Azure portal](https://portal.azure.com/) and navigate to your subscription. The `CosmosRestoreOperator` role is available at subscription level. Following permissions are required to perform the different activities pertainin Roles with permission can be assigned to different scopes to achieve granular control on who can perform the restore operation within a subscription or a given account. ### Assign capability to restore from any restorable account in a subscription-- Assign a user write action on the specific resource group. This action is required to create a new account in the resource group.-- Assign the `CosmosRestoreOperator` built in role to the specific restorable database account that needs to be restored. In the following command, the scope for the `RestorableDatabaseAccount` is extracted from the `ID` property of result of execution of `az cosmosdb restorable-database-account list`(if using CLI) or `Get-AzCosmosDBRestorableDatabaseAccount`(if using the PowerShell) -Assign the `CosmosRestoreOperator` built-in role at subscription level +- Assign the `CosmosRestoreOperator` built-in role at the subscription level ```azurecli-interactive az role assignment create --role "CosmosRestoreOperator" --assignee <email> --scope /subscriptions/<subscriptionId> ``` -### Assign capability to restore from a specific account -This operation is currently not supported. +### Assign capability to restore from a specific account +- Assign a user write action on the specific resource group. 
This action is required to create a new account in the resource group. +- Assign the `CosmosRestoreOperator` built-in role to the specific restorable database account that needs to be restored. In the following command, the scope for the `RestorableDatabaseAccount` is extracted from the `ID` property of the result of running `az cosmosdb restorable-database-account list` (if using the CLI) or `Get-AzCosmosDBRestorableDatabaseAccount` (if using PowerShell) ++```azurecli-interactive +az role assignment create --role "CosmosRestoreOperator" --assignee <email> --scope <RestorableDatabaseAccount> +``` ### Assign capability to restore from any source account in a resource group. This operation is currently not supported. |
cosmos-db | Periodic Backup Restore Introduction | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/periodic-backup-restore-introduction.md | For Azure Synapse Link enabled accounts, analytical store data isn't included in Two backups are provided free and extra backups are charged according to the region-based pricing for backup storage described in [backup storage pricing](https://azure.microsoft.com/pricing/details/cosmos-db/). -For example, consider a scenario where Backup Retention is configured to **240 hrs** (or **10 days**) and Backup Interval is configured to **24 hours**. This configuration implies that there are **10** copies of the backup data. If you have **1 TB** of data in an Azure region, the cost for backup storage in a given month would be: `0.12 * 1000 * 8` +For example, consider a scenario where Backup Retention is configured to **240 hrs** (or **10 days**) and Backup Interval is configured to **24 hours**. This configuration implies that there are **10** copies of the backup data. If you have **1 TB** of data in an Azure West US region, the cost for backup storage in a given month would be: `0.12 * 1000 * 8`, that is, the per-GB backup storage rate multiplied by 1,000 GB and by the eight billable copies that remain after the two free backups (10 copies minus 2). |
cost-management-billing | Cost Management Billing Overview | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/cost-management-billing-overview.md | Microsoft offers a wide range of tools for optimizing your costs. Some of these - There are many [**free services**](https://azure.microsoft.com/pricing/free-services/) available in Azure. Be sure to pay close attention to the constraints. Different services are free indefinitely, for 12 months, or 30 days. Some are free up to a specific amount of usage and some may have dependencies on other services that aren't free. - The [**Azure pricing calculator**](https://azure.microsoft.com/pricing/calculator/) is the best place to start when planning a new deployment. You can tweak many aspects of the deployment to understand how you'll be charged for that service and identify which SKUs/options will keep you within your desired price range. For more information about pricing for each of the services you use, see [pricing details](https://azure.microsoft.com/pricing/). - [**Azure Advisor cost recommendations**](./costs/tutorial-acm-opt-recommendations.md) should be your first stop when interested in optimizing existing resources. Advisor recommendations are updated daily and are based on your usage patterns. Advisor is available for subscriptions and resource groups. Management group users can also see recommendations but will need to select the desired subscriptions. Billing users can only see recommendations for subscriptions they have resource access to.-- [**Azure saving plans**](./savings-plan/index.yml) save you money when you have consistent usage of Azure compute resources. A savings plan can significantly reduce your resource costs by up to 65% from pay-as-you-go prices.+- [**Azure savings plans**](./savings-plan/index.yml) save you money when you have consistent usage of Azure compute resources. A savings plan can significantly reduce your resource costs by up to 65% from pay-as-you-go prices. - [**Azure reservations**](https://azure.microsoft.com/reservations/) help you save up to 72% compared to pay-as-you-go rates by pre-committing to specific usage amounts for a set time duration. - [**Azure Hybrid Benefit**](https://azure.microsoft.com/pricing/hybrid-benefit/) helps you significantly reduce costs by using on-premises Windows Server and SQL Server licenses or RedHat and SUSE Linux subscriptions on Azure. |
cost-management-billing | Enable Preview Features Cost Management Labs | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/costs/enable-preview-features-cost-management-labs.md | Once you know which resources you'd like to group, use the following steps to ta 2. Select **Properties** in the resource menu. 3. Find the **Resource ID** property and copy its value. 4. Open **All resources** or the resource group that has the resources you want to link.-5. Select the checkboxes for every resource you want to link and click the **Assign tags** command. +5. Select the checkboxes for every resource you want to link and then select the **Assign tags** command. 6. Specify a tag key of "cm-resource-parent" (make sure it's typed correctly) and paste the resource ID from step 3. 7. Wait 24 hours for new usage to be sent to Cost Management with the tags. (Keep in mind resources must be actively running with charges for tags to be updated in Cost Management.) 8. Open the [Resources view](https://aka.ms/costanalysis/resources) in the cost analysis preview. Cost insights surface important details about your subscriptions, like potential ## View cost for your resources -Cost analysis is available from every management group, subscription, resource group, and billing scope in the Azure portal and the Microsoft 365 admin center. To make cost data more readily accessible for resource owners, you can now find a **View cost** link at the top-right of every resource overview screen, in **Essentials**. Clicking the link will open classic cost analysis with a resource filter applied. +Cost analysis is available from every management group, subscription, resource group, and billing scope in the Azure portal and the Microsoft 365 admin center. To make cost data more readily accessible for resource owners, you can now find a **View cost** link at the top-right of every resource overview screen, in **Essentials**. Select the link to open classic cost analysis with a resource filter applied. The view cost link is enabled by default in the [Azure preview portal](https://preview.portal.azure.com). |
cost-management-billing | Exchange And Refund Azure Reservations | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/reservations/exchange-and-refund-azure-reservations.md | However, you can't exchange dissimilar reservations. For example, you can't exch You can also exchange a reservation to purchase another reservation of a similar type in a different region. For example, you can exchange a reservation that's in West US 2 region for one that's in West Europe region. > [!NOTE]-> Exchanges will be unavailable for all compute reservations - Azure Reserved Virtual Machine Instances, Azure Dedicated Host reservations, and Azure App Services reservations - purchased on or after **January 1, 2024**. Compute reservations purchased **prior to January 1, 2024** will reserve the right to **exchange one more time** after the policy change goes into effect. Microsoft launched Azure savings plan for compute and it's designed to help you save broadly on predictable compute usage. The savings plan provides more flexibility needed to accommodate changes such as virtual machine series and regions. With savings plan providing the flexibility automatically, we’re adjusting our reservations exchange policy. You can continue to exchange VM sizes (with instance size flexibility) but we'll no longer support exchanging instance series or regions for Azure Reserved Virtual Machine Instances, Azure Dedicated Host reservations, and Azure App Services reservations. For more information about the exchange policy change, see [Changes to the Azure reservation exchange policy](reservation-exchange-policy-changes.md). +> Exchanges will be unavailable for all compute reservations - Azure Reserved Virtual Machine Instances, Azure Dedicated Host reservations, and Azure App Services reservations - purchased on or after **January 1, 2024**. Compute reservations purchased **prior to January 1, 2024** will reserve the right to **exchange one more time** after the policy change goes into effect. Microsoft launched Azure savings plan for compute and it's designed to help you save broadly on predictable compute usage. The savings plan provides more flexibility needed to accommodate changes such as virtual machine series and regions. With savings plan providing the flexibility automatically, we’re adjusting our reservations exchange policy. You can continue to use instance size flexibility for VM sizes, but we'll no longer support exchanging instance series or regions for Azure Reserved Virtual Machine Instances, Azure Dedicated Host reservations, and Azure App Services reservations. For more information about the exchange policy change, see [Changes to the Azure reservation exchange policy](reservation-exchange-policy-changes.md). > > You may [trade-in](../savings-plan/reservation-trade-in.md) your Azure compute reservations for a savings plan. Or, you may continue to use and purchase reservations for those predictable, stable workloads where you know the specific configuration you’ll need and want additional savings. Learn more about [Azure savings plan for compute and how it works with reservations](../savings-plan/index.yml). |
cost-management-billing | Reservation Exchange Policy Changes | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/reservations/reservation-exchange-policy-changes.md | Exchanges will be unavailable for all compute reservations - Azure Reserved Virt Microsoft launched Azure savings plan for compute and it's designed to help you save broadly on predictable compute usage. The savings plan provides more flexibility needed to accommodate changes such as virtual machine series and regions. With savings plans providing the flexibility automatically, we’re adjusting our reservations exchange policy. -You can continue to exchange VM sizes (with instance size flexibility). However, Microsoft is ending exchanges for regions and instance series for these Azure compute reservations. +You can continue to use instance size flexibility for VM sizes, but Microsoft is ending exchanges for regions and instance series for these Azure compute reservations. The current cancellation policy for reserved instances isn't changing. The total canceled commitment can't exceed 50,000 USD in a 12-month rolling window for a billing profile or single enrollment. |
cost-management-billing | Reservation Trade In | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/savings-plan/reservation-trade-in.md | The following reservations aren't eligible to be traded in for savings plans: - SUSE Linux plans > [!NOTE]-> Exchanges will be unavailable for all compute reservations - Azure Reserved Virtual Machine Instances, Azure Dedicated Host reservations, and Azure App Services reservations - purchased on or after **January 1, 2024**. Compute reservations purchased **prior to January 1, 2024** will reserve the right to **exchange one more time** after the policy change goes into effect. Azure savings plan for compute is designed to help you save broadly on predictable compute usage. The savings plan provides more flexibility needed to accommodate changes such as virtual machine series and regions. With savings plan providing the flexibility automatically, we’re adjusting our reservations exchange policy. You can continue to exchange VM sizes (with instance size flexibility) but we'll no longer support exchanging instance series or regions for Azure Reserved Virtual Machine Instances, Azure Dedicated Host reservations, and Azure App Services reservations. +> Exchanges will be unavailable for all compute reservations - Azure Reserved Virtual Machine Instances, Azure Dedicated Host reservations, and Azure App Services reservations - purchased on or after **January 1, 2024**. Compute reservations purchased **prior to January 1, 2024** will reserve the right to **exchange one more time** after the policy change goes into effect. Azure savings plan for compute is designed to help you save broadly on predictable compute usage. The savings plan provides more flexibility needed to accommodate changes such as virtual machine series and regions. With savings plan providing the flexibility automatically, we’re adjusting our reservations exchange policy. You can continue to use instance size flexibility for VM sizes, but we'll no longer support exchanging instance series or regions for Azure Reserved Virtual Machine Instances, Azure Dedicated Host reservations, and Azure App Services reservations. > > You may trade-in your Azure compute reservations for a savings plan. Or, you may continue to use and purchase reservations for those predictable, stable workloads where you know the specific configuration you’ll need and want additional savings. For more information, see [Self-service exchanges and refunds for Azure Reservations](../reservations/exchange-and-refund-azure-reservations.md). |
cost-management-billing | Download Azure Daily Usage | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/understand/download-azure-daily-usage.md | Use the following information to download usage for billed charges. The same ste 1. In the invoice grid, find the row of the invoice corresponding to the usage file that you want to download. 1. Select the ellipsis symbol (`...`) at the end of the row. 1. In the context menu, select **Prepare Azure usage file**. A notification message appears stating that the usage file is being prepared.-1. When the file is ready to download, select the **Click here to download** link in the notification. If you missed the notification, you can view it from **Notifications** area in top right of the Azure portal (the bell symbol). +1. When the file is ready to download, select **Download**. If you missed the notification, you can view it from the **Notifications** area in the top right of the Azure portal (the bell symbol). ## Get usage data with Azure CLI |
data-factory | Data Flow Parse | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/data-flow-parse.md | In the parse transformation configuration panel, you'll first pick the type of d ### Column -Similar to derived columns and aggregates, this is where you'll either modify an exiting column by selecting it from the drop-down picker. Or you can type in the name of a new column here. ADF will store the parsed source data in this column. In most cases, you'll want to define a new column that parses the incoming embedded document string field. +Similar to derived columns and aggregates, this is where you'll either modify an existing column by selecting it from the drop-down picker, or type in the name of a new column here. ADF will store the parsed source data in this column. In most cases, you'll want to define a new column that parses the incoming embedded document string field. ### Expression |
data-factory | How To Manage Studio Preview Exp | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/how-to-manage-studio-preview-exp.md | The monitoring experience remains the same as detailed [here](monitor-visually.m #### Error message relocation to Status column +> [!NOTE] +> This feature is now generally available in the ADF studio. + To make it easier for you to view errors when you see a **Failed** pipeline run, error messages have been relocated to the **Status** column. Find the error icon in the pipeline monitoring page and in the pipeline **Output** tab after debugging your pipeline. Find the error icon in the pipeline monitoring page and in the pipeline **Output #### Container view > [!NOTE]-> This feature will now be generally available in the ADF studio. +> This feature is now generally available in the ADF studio. When monitoring your pipeline run, you have the option to enable the container view, which will provide a consolidated view of the activities that ran. This view is available in the output of your pipeline debug run and in the detailed monitoring view found in the monitoring tab. Click the button next to the iteration or conditional activity to collapse the n #### Simplified default monitoring view -The default monitoring view has been simplified with fewer default columns. You can add/remove columns if you’d like to personalize your monitoring view. Changes to the default will be cached. +The default monitoring view has been simplified with fewer default columns. You can add/remove columns if you’d like to personalize your monitoring view. Changes to the default will be cached. :::image type="content" source="media/how-to-manage-studio-preview-exp/data-factory-preview-exp-20.png" alt-text="Screenshot of the new default column view on the monitoring page."::: Add columns by clicking **Add column** or remove columns by clicking the trashca :::image type="content" source="media/how-to-manage-studio-preview-exp/data-factory-preview-exp-22.png" alt-text="Screenshot of the Add column button and trashcan icon to edit column view."::: +You can also now view **Pipeline run details** in a new pane in the detailed pipeline monitoring view by clicking **View run detail**. ++ ## Provide feedback We want to hear from you! If you see this pop-up, please let us know your thoughts by providing feedback on the updates you've tested. |
data-manager-for-agri | Concepts Hierarchy Model | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-manager-for-agri/concepts-hierarchy-model.md | Title: Hierarchy model in Azure Data Manager for Agriculture description: Provides information on the data model to organize your agriculture data. -+ -+ Last updated 02/14/2023-+ # Hierarchy model to organize agriculture related data |
data-manager-for-agri | Concepts Ingest Satellite Imagery | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-manager-for-agri/concepts-ingest-satellite-imagery.md | Title: Ingesting satellite data in Azure Data Manager for Agriculture description: Provides step by step guidance to ingest Satellite data-+ -+ Last updated 02/14/2023-+ # Using satellite imagery in Azure Data Manager for Agriculture |
data-manager-for-agri | Concepts Ingest Sensor Data | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-manager-for-agri/concepts-ingest-sensor-data.md | Title: Ingesting sensor data in Azure Data Manager for Agriculture description: Provides step by step guidance to ingest Sensor data.-+ -+ Last updated 02/14/2023-+ # Ingesting sensor data |
data-manager-for-agri | Concepts Ingest Weather Data | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-manager-for-agri/concepts-ingest-weather-data.md | description: Learn how to fetch weather data from various weather data providers -+ Last updated 02/14/2023-+ # Weather data overview |
data-manager-for-agri | Concepts Isv Solutions | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-manager-for-agri/concepts-isv-solutions.md | Title: ISV solution framework in Azure Data Manager for Agriculture description: Provides information on using solutions from ISVs -+ -+ Last updated 02/14/2023-+ # What is our Solution Framework? |
data-manager-for-agri | How To Set Up Audit Logs | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-manager-for-agri/how-to-set-up-audit-logs.md | Title: Enable logging for Azure Data Manager for Agriculture description: Learn how enable logging and debugging in Azure Data Manager for Agriculture-+ -+ Last updated 04/10/2023-+ # Azure Data Manager for Agriculture logging |
data-manager-for-agri | How To Set Up Isv Solution | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-manager-for-agri/how-to-set-up-isv-solution.md | Title: Use ISV solutions with Data Manager for Agriculture. description: Learn how to use APIs from a third-party solution-+ -+ Last updated 02/14/2023-+ # How do I use an ISV solution? |
data-manager-for-agri | How To Set Up Private Links | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-manager-for-agri/how-to-set-up-private-links.md | Title: Creating a private endpoint for Azure Data Manager for Agriculture description: Learn how to use private links in Azure Data Manager for Agriculture-+ -+ Last updated 03/22/2023-+ # Create a private endpoint for Azure Data Manager for Agriculture |
data-manager-for-agri | How To Set Up Sensors Customer | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-manager-for-agri/how-to-set-up-sensors-customer.md | Title: How to set up a sensor in Azure Data Manager for Agriculture description: Provides step by step guidance to integrate Sensor as a customer-+ -+ Last updated 02/14/2023-+ # Sensor integration as a customer |
data-manager-for-agri | How To Set Up Sensors Partner | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-manager-for-agri/how-to-set-up-sensors-partner.md | description: Provides guidance to set up your sensors as a partner -+ Last updated 02/14/2023-+ # Sensor partner integration flow |
data-manager-for-agri | How To Use Nutrient Apis | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-manager-for-agri/how-to-use-nutrient-apis.md | Title: Use plant tissue nutrients APIs in Azure Data Manager for Agriculture description: Learn how to store nutrient data in Azure Data Manager for Agriculture-+ -+ Last updated 02/14/2023-+ # Using tissue samples data |
data-manager-for-agri | How To Write Weather Extension | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-manager-for-agri/how-to-write-weather-extension.md | description: Provides guidance to use weather extension -+ Last updated 02/14/2023-+ # How to write a weather extension |
data-manager-for-agri | Overview Azure Data Manager For Agriculture | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-manager-for-agri/overview-azure-data-manager-for-agriculture.md | Title: What is Microsoft Azure Data Manager for Agriculture? #Required; page title is displayed in search results. Include the brand. -description: About Azure Data Manager for Agriculture #Required; article description that is displayed in search results. -+ Title: What is Microsoft Azure Data Manager for Agriculture? +description: About Azure Data Manager for Agriculture + -- Previously updated : 02/14/2023 #Required; mm/dd/yyyy format.-++ Last updated : 02/14/2023+ # What is Azure Data Manager for Agriculture Preview? |
data-manager-for-agri | Quickstart Install Data Manager For Agriculture | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-manager-for-agri/quickstart-install-data-manager-for-agriculture.md | Title: How to install Azure Data Manager for Agriculture description: Provides step by step guidance to install Data Manager for Agriculture-+ Last updated 04/05/2023-+ # Quickstart install Azure Data Manager for Agriculture preview |
databox-online | Azure Stack Edge Pro 2 Deploy Configure Compute | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/databox-online/azure-stack-edge-pro-2-deploy-configure-compute.md | Before you set up a compute role on your Azure Stack Edge Pro device, make sure - Enabled a network interface for compute. - Assigned Kubernetes node IPs and Kubernetes external service IPs. + > [!NOTE] + > If your datacenter firewall is restricting or filtering traffic based on source IPs or MAC addresses, make sure that the compute IPs (Kubernetes node IPs) and MAC addresses are on the allowed list. The MAC addresses can be specified by running the `Set-HcsMacAddressPool` cmdlet on the PowerShell interface of the device. + ## Configure compute [!INCLUDE [configure-compute](../../includes/azure-stack-edge-gateway-configure-compute.md)] |
ddos-protection | Manage Ddos Protection Terraform | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ddos-protection/manage-ddos-protection-terraform.md | In this article, you learn how to: -Name $ddos_protection_plan_name ``` -1. Get the virtual network name. -- ```console - $virtual_network_name=$(terraform output -raw virtual_network_name) - ``` --1. Run [Get-AzVirtualNetwork](/powershell/module/az.network/get-azvirtualnetwork) to display information about the new virtual network. -- ```azurepowershell - Get-AzVirtualNetwork -ResourceGroupName $resource_group_name ` - -Name $virtual_network_name - ``` - ## Clean up resources |
defender-for-iot | Alerts | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/alerts.md | While you can view alert details, investigate alert context, and triage and mana |**OT network sensor consoles** | Alerts generated by that OT sensor | - View the alert's source and destination in the **Device map** <br>- View related events on the **Event timeline** <br>- Forward alerts directly to partner vendors <br>- Create alert comments <br> - Create custom alert rules <br>- Unlearn alerts | |**An on-premises management console** | Alerts generated by connected OT sensors | - Forward alerts directly to partner vendors <br> - Create alert exclusion rules | -For more information, see [Accelerating OT alert workflows](#accelerating-ot-alert-workflows) and [Alert statuses and triaging options](alerts.md#alert-statuses-and-triaging-options) below. +For more information, see: ++- [Alert data retention](references-data-retention.md#alert-data-retention) +- [Accelerating OT alert workflows](#accelerating-ot-alert-workflows) +- [Alert statuses and triaging options](alerts.md#alert-statuses-and-triaging-options) Alert options also differ depending on your location and user role. For more information, see [Azure user roles and permissions](roles-azure.md) and [On-premises users and roles](roles-on-premises.md). |
defender-for-iot | Sample Connectivity Models | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/best-practices/sample-connectivity-models.md | The following diagram shows an example of a ring network topology, in which each ## Sample: Linear bus and star topology -In a star network, every host is connected to a central hub. In its simplest form, one central hub acts as a conduit to transmit messages. In the following example, lower switches aren't monitored, and traffic that remains local to these switches won't be seen. Devices might be identified based on ARP messages, but connection information will be missing. +In a star network such as the one shown in the diagram below, every host is connected to a central hub. In its simplest form, one central hub acts as a conduit to transmit messages. In the following example, lower switches aren't monitored, and traffic that remains local to these switches won't be seen. Devices might be identified based on ARP messages, but connection information will be missing. :::image type="content" source="../media/how-to-set-up-your-network/linear-bus-star-topology.png" alt-text="Diagram of the linear bus and star topology." border="false" lightbox="../media/how-to-set-up-your-network/linear-bus-star-topology.png"::: ## Sample: Multi-layer, multi-tenant network -The following diagram is a general abstraction of a multilayer, multitenant network, with an expansive cybersecurity ecosystem typically operated by an SOC and MSSP. --Typically, NTA sensors are deployed in layers 0 to 3 of the OSI model. +The following diagram is a general abstraction of a multilayer, multi-tenant network, with an expansive cybersecurity ecosystem typically operated by a security operations center (SOC) and a managed security service provider (MSSP). Defender for IoT sensors are typically deployed in layers 0 to 3 of the OSI model. :::image type="content" source="../media/how-to-set-up-your-network/osi-model.png" alt-text="Diagram of the OSI model." lightbox="../media/how-to-set-up-your-network/osi-model.png" border="false"::: ## Next steps -After you've [understood your own network's OT architecture](understand-network-architecture.md) and [planned out your deployment](plan-network-monitoring.md), learn more about methods for traffic mirroring and passive or active monitoring. --For more information, see: --- [Traffic mirroring methods for OT monitoring](traffic-mirroring-methods.md)+For more information, see [Traffic mirroring methods for OT monitoring](traffic-mirroring-methods.md). |
defender-for-iot | Compliance | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/compliance.md | + + Title: Compliance - Microsoft Defender for IoT +description: Learn about compliance resources for Microsoft Defender for IoT. + Last updated : 04/14/2023+++# Microsoft Defender for IoT compliance resources ++Defender for IoT cloud services, formerly named *Azure Defender for IoT* or *Azure Security for IoT*, are based on Microsoft Azure’s infrastructure, which meets demanding US government and international compliance requirements that produce formal authorizations. ++## Provisional authorizations + +Defender for IoT is in scope for the following provisional authorizations in Azure and Azure Government: ++- [FedRAMP High](/azure/compliance/offerings/offering-fedramp) +- [DoD IL2](/azure/compliance/offerings/offering-dod-il2) ++Moreover, Defender for IoT maintains extra [DoD IL4](/azure/compliance/offerings/offering-dod-il4) and [DoD IL5](/azure/compliance/offerings/offering-dod-il5) provisional authorizations in Azure Government. ++For more information, see [Azure and other Microsoft cloud services compliance scope](/azure/azure-government/compliance/azure-services-in-fedramp-auditscope#azure-public-services-by-audit-scope). +++## Accessibility ++Defender for IoT is committed to developing technology that empowers everyone, including people with disabilities, and helps customers address global accessibility requirements. ++For more information, search for *Azure Security for IoT* in [Accessibility Conformance Reports | Microsoft Accessibility](https://www.microsoft.com/accessibility/conformance-reports?rtc=1). ++## Cloud compliance ++Defender for IoT helps customers meet their compliance obligations across regulated industries and markets worldwide. ++For more information, see [Azure and other Microsoft cloud services compliance offerings](/azure/compliance/offerings/). ++## Next steps ++> [!div class="nextstepaction"] +> [Welcome to Microsoft Defender for IoT for organizations](overview.md) |
defender-for-iot | How To Accelerate Alert Incident Response | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/how-to-accelerate-alert-incident-response.md | For more information, see ## Next steps -> [!div class="nextstepaction"] -> [View and manage alerts from the Azure portal](how-to-manage-cloud-alerts.md) --> [!div class="nextstepaction"] -> [View and manage alerts on your OT sensor](how-to-view-alerts.md) --> [!div class="nextstepaction"] -> [Forward alert information](how-to-forward-alert-information-to-partners.md) --> [!div class="nextstepaction"] -> [OT monitoring alert types and descriptions](alert-engine-messages.md) --> [!div class="nextstepaction"] -> [View and manage alerts on the on-premises management console](how-to-work-with-alerts-on-premises-management-console.md) - > [!div class="nextstepaction"] > [Microsoft Defender for IoT alerts](alerts.md) |
defender-for-iot | How To Forward Alert Information To Partners | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/how-to-forward-alert-information-to-partners.md | If your forwarding alert rules aren't working as expected, check the following d ## Next steps -> [!div class="nextstepaction"] -> [Microsoft Defender for IoT alerts](alerts.md) --> [!div class="nextstepaction"] -> [View and manage alerts on your OT sensor](how-to-view-alerts.md) --> [!div class="nextstepaction"] -> [View and manage alerts from the Azure portal](how-to-manage-cloud-alerts.md) --> [!div class="nextstepaction"] -> [OT monitoring alert types and descriptions](alert-engine-messages.md) -- > [!div class="nextstepaction"] > [Microsoft Defender for IoT alerts](alerts.md) |
defender-for-iot | How To Investigate All Enterprise Sensor Detections In A Device Inventory | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/how-to-investigate-all-enterprise-sensor-detections-in-a-device-inventory.md | For more information, see [Defender for IoT sensor and management console APIs]( For more information, see: +- [Defender for IoT device inventory](device-inventory.md) - [Control what traffic is monitored](how-to-control-what-traffic-is-monitored.md) - [Detect Windows workstations and servers with a local script](detect-windows-endpoints-script.md) - [Device data retention periods](references-data-retention.md#device-data-retention-periods). |
defender-for-iot | How To Investigate Sensor Detections In A Device Inventory | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/how-to-investigate-sensor-detections-in-a-device-inventory.md | All devices detected within the range of the filter will be deleted. If you dele For more information, see: +- [Defender for IoT device inventory](device-inventory.md) - [Control what traffic is monitored](how-to-control-what-traffic-is-monitored.md) - [Detect Windows workstations and servers with a local script](detect-windows-endpoints-script.md) - [Device data retention periods](references-data-retention.md#device-data-retention-periods) |
defender-for-iot | How To Manage Cloud Alerts | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/how-to-manage-cloud-alerts.md | The file is generated, and you're prompted to save it locally. ## Next steps -> [!div class="nextstepaction"] -> [Forward alert information](how-to-forward-alert-information-to-partners.md) --> [!div class="nextstepaction"] -> [OT monitoring alert types and descriptions](alert-engine-messages.md) - > [!div class="nextstepaction"] > [Microsoft Defender for IoT alerts](alerts.md)--> [!div class="nextstepaction"] -> [Data retention across Microsoft Defender for IoT](references-data-retention.md) |
defender-for-iot | How To Manage Device Inventory For Organizations | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/how-to-manage-device-inventory-for-organizations.md | The merged device that is now listed in the grid retains the details of the devi For more information, see: +- [Defender for IoT device inventory](device-inventory.md) - [Control what traffic is monitored](how-to-control-what-traffic-is-monitored.md) - [Detect Windows workstations and servers with a local script](detect-windows-endpoints-script.md) - [Device data retention periods](references-data-retention.md#device-data-retention-periods). |
defender-for-iot | How To Manage Sensors On The Cloud | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/how-to-manage-sensors-on-the-cloud.md | If you need to open a support ticket for a locally managed sensor, upload a diag ## Next steps -> [!div class="nextstepaction"] -> [Manage OT sensors from the sensor console](how-to-manage-individual-sensors.md) - > [!div class="nextstepaction"] > [Define and view OT sensor settings from the Azure portal (Public preview)](configure-sensor-settings-portal.md) > [!div class="nextstepaction"]-> [View and manage alerts on the Defender for IoT portal (Preview)](how-to-manage-cloud-alerts.md) +> [Manage OT sensors from the sensor console](how-to-manage-individual-sensors.md) |
defender-for-iot | How To Track Sensor Activity | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/how-to-track-sensor-activity.md | -The event timeline provides a chronological view and context of all network activity, to help determine the cause and effect of incidents. The timeline view makes it easy to extract information from network events, and more efficiently analyze alerts and events observed on the network. With the ability to store vast amounts of data, the event timeline view can be a valuable resource for security teams to perform investigations and gain a deeper understanding of network activity. +The OT sensor's event timeline provides a chronological view and context of all network activity, to help determine the cause and effect of incidents. The timeline view makes it easy to extract information from network events, and more efficiently analyze alerts and events observed on the network. With the ability to store vast amounts of data, the event timeline view can be a valuable resource for security teams to perform investigations and gain a deeper understanding of network activity. Use the event timeline during investigations, to understand and analyze the chain of events that preceded and followed an attack or incident. The centralized view of multiple security-related events on the same timeline helps to identify patterns and correlations, and enable security teams to quickly assess the impact of incidents and respond accordingly. -Enhance your security analysis and incident investigations with the event timeline, with the following options: +For more information, see: - [View events on the timeline](#view-the-event-timeline)- - [Audit user activity](track-user-activity.md)- - [View and manage alerts](how-to-view-alerts.md#view-details-and-remediate-a-specific-alert)- - [Analyze programming details and changes](how-to-analyze-programming-details-changes.md) ## Permissions -Administrator or Security Analyst permissions are required to perform the procedures described in this article. +Before you perform the procedures described in this article, make sure that you have access to an OT sensor as a user with the **Admin** or **Security Analyst** role. For more information, see [On-premises users and roles for OT monitoring with Defender for IoT](roles-on-premises.md). ## View the event timeline The maximum number of events shown in the event timeline is dependent on [the ha ## Next steps -[Audit user activity](track-user-activity.md) +For more information, see: -[View details and remediate a specific alert](how-to-view-alerts.md#view-details-and-remediate-a-specific-alert) --[Analyze programming details and changes](how-to-analyze-programming-details-changes.md) +- [Audit user activity](track-user-activity.md) +- [View details and remediate a specific alert](how-to-view-alerts.md#view-details-and-remediate-a-specific-alert) +- [Analyze programming details and changes](how-to-analyze-programming-details-changes.md) |
defender-for-iot | How To View Alerts | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/how-to-view-alerts.md | For more information, see [Accelerating OT alert workflows](alerts.md#accelerati ## Next steps -> [!div class="nextstepaction"] -> [View and manage alerts from the Azure portal](how-to-manage-cloud-alerts.md) --> [!div class="nextstepaction"] -> [View and manage alerts on the the on-premises management console](how-to-work-with-alerts-on-premises-management-console.md) --> [!div class="nextstepaction"] -> [Accelerate alert workflows on an OT network sensor](how-to-accelerate-alert-incident-response.md) --> [!div class="nextstepaction"] -> [Forward alert information](how-to-forward-alert-information-to-partners.md) --> [!div class="nextstepaction"] -> [OT monitoring alert types and descriptions](alert-engine-messages.md) --> [!div class="nextstepaction"] -> [Microsoft Defender for IoT alerts](alerts.md) - > [!div class="nextstepaction"] > [Data retention across Microsoft Defender for IoT](references-data-retention.md) |
defender-for-iot | On Premises Sentinel | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/integrations/on-premises-sentinel.md | Title: How to connect on-premises Defender for IoT resources to Microsoft Sentinel description: Learn how to stream data into Microsoft Sentinel from an on-premises and locally-managed Microsoft Defender for IoT OT network sensor or an on-premises management console.-+ Last updated 12/26/2022-+ # Connect on-premises OT network sensors to Microsoft Sentinel |
defender-for-iot | Overview | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/overview.md | For more information, see the [Microsoft Defender for IoT for device builders do Defender for IoT routes all traffic from all European regions to the *West Europe* regional datacenter. It routes traffic from all remaining regions to the *East US* regional datacenter. -## Compliance scope --Defender for IoT cloud services (formerly *Azure Defender for IoT* or *Azure Security for IoT*) are based on Microsoft Azure's infrastructure, which meets demanding US government and international compliance requirements that produce formal authorizations. - -Specifically: -- Defender for IoT is in scope for the following provisional authorizations in Azure and Azure Government: [FedRAMP High](/azure/compliance/offerings/offering-fedramp) and [DoD IL2](/azure/compliance/offerings/offering-dod-il2). Moreover, Defender for IoT maintains extra [DoD IL4](/azure/compliance/offerings/offering-dod-il4) and [DoD IL5](/azure/compliance/offerings/offering-dod-il5) provisional authorizations in Azure Government. For more information, see [Azure and other Microsoft cloud services compliance scope](/azure/azure-government/compliance/azure-services-in-fedramp-auditscope#azure-public-services-by-audit-scope).-- Defender for IoT is committed to developing technology that empowers everyone, including people with disabilities, and helps customers address global accessibility requirements. For more information, search for *Azure Security for IoT* in [Accessibility Conformance Reports | Microsoft Accessibility](https://www.microsoft.com/accessibility/conformance-reports?rtc=1).-- Defender for IoT helps customers meet their compliance obligations across regulated industries and markets worldwide. For more information, see [Azure and other Microsoft cloud services compliance offerings](/azure/compliance/offerings/).- ## Next steps > [!div class="nextstepaction"] |
defender-for-iot | Whats New | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/whats-new.md | Features released earlier than nine months ago are described in the [What's new |Service area |Updates | |||-| **OT networks** | **Sensor version 22.3.6**: <br>- [Support for transient devices](#support-for-transient-devices)<br>- [Learn DNS traffic by configuring allowlists](#learn-dns-traffic-by-configuring-allowlists)<br>- [Device data retention updates](#device-data-retention-updates)<br>- [UI enhancements when uploading SSL/TLS certificates](#ui-enhancements-when-uploading-ssltls-certificates)<br>- [Activation files expiration updates](#activation-files-expiration-updates)<br>- [UI enhancements for managing the device inventory](#ui-enhancements-for-managing-the-device-inventory)<br>- [Updated severity for all Suspicion of Malicious Activity alerts](#updated-severity-for-all-suspicion-of-malicious-activity-alerts)<br>- [Automatically resolved device notifications](#automatically-resolved-device-notifications) <br><br> **Cloud features**: <br>- [New Microsoft Sentinel incident experience for Defender for IoT](#new-microsoft-sentinel-incident-experience-for-defender-for-iot) | +| **OT networks** | **Sensor version 22.3.6/22.3.7**: <br>- [Support for transient devices](#support-for-transient-devices)<br>- [Learn DNS traffic by configuring allowlists](#learn-dns-traffic-by-configuring-allowlists)<br>- [Device data retention updates](#device-data-retention-updates)<br>- [UI enhancements when uploading SSL/TLS certificates](#ui-enhancements-when-uploading-ssltls-certificates)<br>- [Activation files expiration updates](#activation-files-expiration-updates)<br>- [UI enhancements for managing the device inventory](#ui-enhancements-for-managing-the-device-inventory)<br>- [Updated severity for all Suspicion of Malicious Activity alerts](#updated-severity-for-all-suspicion-of-malicious-activity-alerts)<br>- [Automatically resolved device notifications](#automatically-resolved-device-notifications) <br><br>Version 22.3.7 includes the same features as 22.3.6. If you have version 22.3.6 installed, we strongly recommend that you update to version 22.3.7, which also includes important bug fixes.<br><br> **Cloud features**: <br>- [New Microsoft Sentinel incident experience for Defender for IoT](#new-microsoft-sentinel-incident-experience-for-defender-for-iot) | ### Support for transient devices |
devtest-labs | Create Lab Windows Vm Terraform | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/devtest-labs/quickstarts/create-lab-windows-vm-terraform.md | Title: 'Quickstart: Create a lab in Azure DevTest Labs using Terraform' description: 'In this article, you create a Windows virtual machine in a lab within Azure DevTest Labs using Terraform' Previously updated : 2/27/2023- Last updated : 4/14/2023+ |
dns | Dns Delegate Domain Azure Dns | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dns/dns-delegate-domain-azure-dns.md | |
dns | Dns Get Started Terraform | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dns/dns-get-started-terraform.md | Title: 'Quickstart: Create an Azure DNS zone and record using Terraform' description: 'In this article, you create an Azure DNS zone and record using Terraform' Previously updated : 3/17/2023- Last updated : 4/14/2023+ |
energy-data-services | Concepts Csv Parser Ingestion | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/energy-data-services/concepts-csv-parser-ingestion.md | Title: Microsoft Azure Data Manager for Energy Preview csv parser ingestion workflow concept #Required; page title is displayed in search results. Include the brand. -description: Learn how to use CSV parser ingestion. #Required; article description that is displayed in search results. ----+ Title: Microsoft Azure Data Manager for Energy Preview csv parser ingestion workflow concept +description: Learn how to use CSV parser ingestion. ++++ Last updated 02/10/2023-+ # CSV parser ingestion concepts |
energy-data-services | Concepts Ddms | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/energy-data-services/concepts-ddms.md | Title: Domain data management services concepts #Required; page title is displayed in search results. Include the brand. -description: Learn how to use Domain Data Management Services #Required; article description that is displayed in search results. ----+ Title: Domain data management services concepts +description: Learn how to use Domain Data Management Services ++++ Last updated 08/18/2022-+ # Domain data management service concepts |
energy-data-services | Concepts Entitlements | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/energy-data-services/concepts-entitlements.md | Title: Microsoft Azure Data Manager for Energy Preview entitlement concepts #Required; page title is displayed in search results. Include the brand. -description: This article describes the various concepts regarding the entitlement services in Azure Data Manager for Energy Preview #Required; article description that is displayed in search results. ----+ Title: Microsoft Azure Data Manager for Energy Preview entitlement concepts +description: This article describes the various concepts regarding the entitlement services in Azure Data Manager for Energy Preview ++++ Last updated 02/10/2023-+ # Entitlement service |
energy-data-services | Concepts Index And Search | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/energy-data-services/concepts-index-and-search.md | Title: Microsoft Azure Data Manager for Energy Preview - index and search workflow concepts #Required; page title is displayed in search results. Include the brand. -description: Learn how to use indexing and search workflows #Required; article description that is displayed in search results. ----+ Title: Microsoft Azure Data Manager for Energy Preview - index and search workflow concepts +description: Learn how to use indexing and search workflows ++++ Last updated 02/10/2023-+ #Customer intent: As a developer, I want to understand indexing and search workflows so that I could search for ingested data in the platform. |
energy-data-services | Concepts Manifest Ingestion | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/energy-data-services/concepts-manifest-ingestion.md | Title: Microsoft Azure Data Manager for Energy Preview manifest ingestion concepts #Required; page title is displayed in search results. Include the brand. -description: This article describes manifest ingestion concepts #Required; article description that is displayed in search results. ----+ Title: Microsoft Azure Data Manager for Energy Preview manifest ingestion concepts +description: This article describes manifest ingestion concepts ++++ Last updated 08/18/2022-+ # Manifest-based ingestion concepts |
energy-data-services | How To Convert Segy To Ovds | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/energy-data-services/how-to-convert-segy-to-ovds.md | Title: Microsoft Azure Data Manager for Energy Preview - How to convert a segy to ovds file #Required; page title is displayed in search results. Include the brand. -description: This article explains how to convert a SGY file to oVDS file format #Required; article description that is displayed in search results. ----+ Title: Microsoft Azure Data Manager for Energy Preview - How to convert a segy to ovds file +description: This article explains how to convert a SGY file to oVDS file format ++++ Last updated 08/18/2022-+ # How to convert a SEG-Y file to oVDS |
energy-data-services | How To Convert Segy To Zgy | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/energy-data-services/how-to-convert-segy-to-zgy.md | Title: Microsoft Azure Data Manager for Energy Preview - How to convert segy to zgy file #Required; page title is displayed in search results. Include the brand. -description: This article describes how to convert a SEG-Y file to a ZGY file #Required; article description that is displayed in search results. ----+ Title: Microsoft Azure Data Manager for Energy Preview - How to convert segy to zgy file +description: This article describes how to convert a SEG-Y file to a ZGY file ++++ Last updated 08/18/2022-+ # How to convert a SEG-Y file to ZGY |
energy-data-services | How To Enable Cors | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/energy-data-services/how-to-enable-cors.md | Title: How to enable CORS - Azure Data Manager for Energy Preview #Required; page title is displayed in search results. Include the brand. -description: Guide on CORS in Azure data manager for Energy and how to set up CORS #Required; article description that is displayed in search results. ---- Previously updated : 02/28/2023 #Required; mm/dd/yyyy format.-+ Title: How to enable CORS - Azure Data Manager for Energy Preview +description: Guide on CORS in Azure data manager for Energy and how to set up CORS ++++ Last updated : 02/28/2023+ # Use CORS for resource sharing in Azure Data Manager for Energy Preview This document is to help you as a user of Azure Data Manager for Energy preview to set up CORS policies. |
energy-data-services | How To Generate Refresh Token | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/energy-data-services/how-to-generate-refresh-token.md | Title: How to generate a refresh token for Microsoft Azure Data Manager for Energy Preview #Required; page title is displayed in search results. Include the brand. -description: This article describes how to generate a refresh token #Required; article description that is displayed in search results. ----+ Title: How to generate a refresh token for Microsoft Azure Data Manager for Energy Preview +description: This article describes how to generate a refresh token ++++ Last updated 10/06/2022-+ #Customer intent: As a developer, I want to learn how to generate a refresh token |
energy-data-services | How To Integrate Airflow Logs With Azure Monitor | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/energy-data-services/how-to-integrate-airflow-logs-with-azure-monitor.md | |
energy-data-services | How To Integrate Elastic Logs With Azure Monitor | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/energy-data-services/how-to-integrate-elastic-logs-with-azure-monitor.md | |
energy-data-services | How To Manage Data Security And Encryption | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/energy-data-services/how-to-manage-data-security-and-encryption.md | Title: Data security and encryption in Microsoft Azure Data Manager for Energy Preview #Required; page title is displayed in search results. Include the brand. -description: Guide on security in Azure Data Manager for Energy Preview and how to set up customer managed keys on Azure Data Manager for Energy Preview #Required; article description that is displayed in search results. ----+ Title: Data security and encryption in Microsoft Azure Data Manager for Energy Preview +description: Guide on security in Azure Data Manager for Energy Preview and how to set up customer managed keys on Azure Data Manager for Energy Preview ++++ Last updated 10/06/2022-+ #Customer intent: As a developer, I want to set up customer-managed keys on Azure Data Manager for Energy Preview. |
energy-data-services | How To Manage Legal Tags | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/energy-data-services/how-to-manage-legal-tags.md | Title: How to manage legal tags in Microsoft Azure Data Manager for Energy Preview #Required; page title is displayed in search results. Include the brand. -description: This article describes how to manage legal tags in Azure Data Manager for Energy Preview #Required; article description that is displayed in search results. ----+ Title: How to manage legal tags in Microsoft Azure Data Manager for Energy Preview +description: This article describes how to manage legal tags in Azure Data Manager for Energy Preview ++++ Last updated 02/20/2023-+ # How to manage legal tags |
energy-data-services | How To Manage Users | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/energy-data-services/how-to-manage-users.md | Title: How to manage users in Microsoft Azure Data Manager for Energy Preview #Required; page title is displayed in search results. Include the brand. -description: This article describes how to manage users in Azure Data Manager for Energy Preview #Required; article description that is displayed in search results. ----+ Title: How to manage users in Microsoft Azure Data Manager for Energy Preview +description: This article describes how to manage users in Azure Data Manager for Energy Preview ++++ Last updated 08/19/2022-+ # How to manage users |
energy-data-services | Overview Ddms | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/energy-data-services/overview-ddms.md | Title: Overview of domain data management services - Microsoft Azure Data Manager for Energy Preview #Required; page title is displayed in search results. Include the brand. -description: This article provides an overview of Domain data management services #Required; article description that is displayed in search results. ---+ Title: Overview of domain data management services - Microsoft Azure Data Manager for Energy Preview +description: This article provides an overview of Domain data management services +++ Last updated 09/01/2022 |
energy-data-services | Overview Microsoft Energy Data Services | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/energy-data-services/overview-microsoft-energy-data-services.md | Title: What is Microsoft Azure Data Manager for Energy Preview? #Required; page title is displayed in search results. Include the brand. -description: This article provides an overview of Azure Data Manager for Energy Preview #Required; article description that is displayed in search results. -+ Title: What is Microsoft Azure Data Manager for Energy Preview? +description: This article provides an overview of Azure Data Manager for Energy Preview + -- Previously updated : 02/08/2023 #Required; mm/dd/yyyy format.-++ Last updated : 02/08/2023+ # What is Azure Data Manager for Energy Preview? |
energy-data-services | Quickstart Create Microsoft Energy Data Services Instance | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/energy-data-services/quickstart-create-microsoft-energy-data-services-instance.md | Title: Create a Microsoft Azure Data Manager for Energy Preview instance #Required; page title is displayed in search results. Include the brand. -description: Quickly create an Azure Data Manager for Energy Preview instance #Required; article description that is displayed in search results. ----+ Title: Create a Microsoft Azure Data Manager for Energy Preview instance +description: Quickly create an Azure Data Manager for Energy Preview instance ++++ Last updated 08/18/2022-+ # Quickstart: Create an Azure Data Manager for Energy Preview instance |
energy-data-services | Release Notes | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/energy-data-services/release-notes.md | Title: Release notes for Microsoft Azure Data Manager for Energy Preview #Required; page title is displayed in search results. Include the brand. -description: This article provides release notes of Azure Data Manager for Energy Preview releases, improvements, bug fixes, and known issues. #Required; article description that is displayed in search results. ---- Previously updated : 09/20/2022 #Required; mm/dd/yyyy format.-+ Title: Release notes for Microsoft Azure Data Manager for Energy Preview +description: This article provides release notes of Azure Data Manager for Energy Preview releases, improvements, bug fixes, and known issues. ++++ Last updated : 09/20/2022+ # Release Notes for Azure Data Manager for Energy Preview |
energy-data-services | Tutorial Csv Ingestion | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/energy-data-services/tutorial-csv-ingestion.md | Title: Microsoft Azure Data Manager for Energy Preview - Steps to perform a CSV parser ingestion #Required; page title is displayed in search results. Include the brand. -description: This tutorial shows you how to perform CSV parser ingestion #Required; article description that is displayed in search results. ----+ Title: Microsoft Azure Data Manager for Energy Preview - Steps to perform a CSV parser ingestion +description: This tutorial shows you how to perform CSV parser ingestion ++++ Last updated 09/19/2022-+ #Customer intent: As a customer, I want to learn how to use CSV parser ingestion so that I can load CSV data into the Azure Data Manager for Energy Preview instance. |
energy-data-services | Tutorial Manifest Ingestion | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/energy-data-services/tutorial-manifest-ingestion.md | Title: Microsoft Azure Data Manager for Energy Preview - Steps to perform a manifest-based file ingestion #Required; page title is displayed in search results. Include the brand. -description: This tutorial shows you how to perform Manifest ingestion #Required; article description that is displayed in search results. ----+ Title: Microsoft Azure Data Manager for Energy Preview - Steps to perform a manifest-based file ingestion +description: This tutorial shows you how to perform Manifest ingestion ++++ Last updated 08/18/2022-+ #Customer intent: As a customer, I want to learn how to use manifest ingestion so that I can load manifest information into the Azure Data Manager for Energy Preview instance. |
energy-data-services | Tutorial Seismic Ddms Sdutil | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/energy-data-services/tutorial-seismic-ddms-sdutil.md | Title: Microsoft Azure Data Manager for Energy Preview - Seismic store sdutil tutorial #Required; page title is displayed in search results. Include the brand. -description: Information on setting up and using sdutil, a command-line interface (CLI) tool that allows users to easily interact with seismic store. #Required; article description that is displayed in search results. ----+ Title: Microsoft Azure Data Manager for Energy Preview - Seismic store sdutil tutorial +description: Information on setting up and using sdutil, a command-line interface (CLI) tool that allows users to easily interact with seismic store. ++++ Last updated 09/09/2022-+ #Customer intent: As a developer, I want to learn how to use sdutil so that I can load data into the seismic store. |
energy-data-services | Tutorial Seismic Ddms | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/energy-data-services/tutorial-seismic-ddms.md | Title: Tutorial - Sample steps to interact with Seismic DDMS in Microsoft Azure Data Manager for Energy Preview #Required; page title is displayed in search results. Include the brand. -description: This tutorial shows you how to interact with Seismic DDMS Azure Data Manager for Energy Preview #Required; article description that is displayed in search results. ----+ Title: Tutorial - Sample steps to interact with Seismic DDMS in Microsoft Azure Data Manager for Energy Preview +description: This tutorial shows you how to interact with Seismic DDMS Azure Data Manager for Energy Preview ++++ Last updated 3/16/2022-+ # Tutorial: Sample steps to interact with Seismic DDMS |
expressroute | Expressroute Howto Routing Portal Resource Manager | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/expressroute/expressroute-howto-routing-portal-resource-manager.md | |
hdinsight | Apache Hadoop Use Hive Curl | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/hadoop/apache-hadoop-use-hive-curl.md | Preserve your credentials to avoid reentering them for each example. The cluste Edit the script below by replacing `PASSWORD` with your actual password. Then enter the command. ```bash-export password='PASSWORD' +export PASSWORD='PASSWORD' ``` **B. PowerShell** The actual casing of the cluster name may be different than you expect, dependin Edit the scripts below to replace `CLUSTERNAME` with your cluster name. Then enter the command. (The cluster name for the FQDN isn't case-sensitive.) ```bash-export clusterName=$(curl -u admin:$password -sS -G "https://CLUSTERNAME.azurehdinsight.net/api/v1/clusters" | jq -r '.items[].Clusters.cluster_name') -echo $clusterName +export CLUSTER_NAME=$(curl -u admin:$PASSWORD -sS -G "https://CLUSTERNAME.azurehdinsight.net/api/v1/clusters" | jq -r '.items[].Clusters.cluster_name') +echo $CLUSTER_NAME ``` ```powershell $clusterName 1. To verify that you can connect to your HDInsight cluster, use one of the following commands: ```bash- curl -u admin:$password -G https://$clusterName.azurehdinsight.net/templeton/v1/status + curl -u admin:$PASSWORD -G https://$CLUSTER_NAME.azurehdinsight.net/templeton/v1/status ``` ```powershell $clusterName 1. The beginning of the URL, `https://$CLUSTERNAME.azurehdinsight.net/templeton/v1`, is the same for all requests. The path, `/status`, indicates that the request is to return a status of WebHCat (also known as Templeton) for the server. You can also request the version of Hive by using the following command: ```bash- curl -u admin:$password -G https://$clusterName.azurehdinsight.net/templeton/v1/version/hive + curl -u admin:$PASSWORD -G https://$CLUSTER_NAME.azurehdinsight.net/templeton/v1/version/hive ``` ```powershell $clusterName 1. Use the following to create a table named **log4jLogs**: ```bash- jobid=$(curl -s -u admin:$password -d user.name=admin -d execute="DROP+TABLE+log4jLogs;CREATE+EXTERNAL+TABLE+log4jLogs(t1+string,t2+string,t3+string,t4+string,t5+string,t6+string,t7+string)+ROW+FORMAT+DELIMITED+FIELDS+TERMINATED+BY+' '+STORED+AS+TEXTFILE+LOCATION+'/example/data/';SELECT+t4+AS+sev,COUNT(*)+AS+count+FROM+log4jLogs+WHERE+t4+=+'[ERROR]'+AND+INPUT__FILE__NAME+LIKE+'%25.log'+GROUP+BY+t4;" -d statusdir="/example/rest" https://$clusterName.azurehdinsight.net/templeton/v1/hive | jq -r .id) - echo $jobid + JOB_ID=$(curl -s -u admin:$PASSWORD -d user.name=admin -d execute="DROP+TABLE+log4jLogs;CREATE+EXTERNAL+TABLE+log4jLogs(t1+string,t2+string,t3+string,t4+string,t5+string,t6+string,t7+string)+ROW+FORMAT+DELIMITED+FIELDS+TERMINATED+BY+' '+STORED+AS+TEXTFILE+LOCATION+'/example/data/';SELECT+t4+AS+sev,COUNT(*)+AS+count+FROM+log4jLogs+WHERE+t4+=+'[ERROR]'+AND+INPUT__FILE__NAME+LIKE+'%25.log'+GROUP+BY+t4;" -d statusdir="/example/rest" https://$CLUSTER_NAME.azurehdinsight.net/templeton/v1/hive | jq -r .id) + echo $JOB_ID ``` ```powershell $clusterName 1. To check the status of the job, use the following command: ```bash- curl -u admin:$password -d user.name=admin -G https://$clusterName.azurehdinsight.net/templeton/v1/jobs/$jobid | jq .status.state + curl -u admin:$PASSWORD -d user.name=admin -G https://$CLUSTER_NAME.azurehdinsight.net/templeton/v1/jobs/$JOB_ID | jq .status.state ``` ```powershell |
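The status call above returns immediately, so a caller usually polls until the job reaches a terminal state. A minimal polling sketch, assuming the `$PASSWORD`, `$CLUSTER_NAME`, and `$JOB_ID` variables from the steps above; the state names are the usual WebHCat values and worth confirming against your cluster's responses:

```bash
# Poll WebHCat until the Hive job finishes; assumes $PASSWORD, $CLUSTER_NAME,
# and $JOB_ID are already exported as shown above.
while true; do
  STATE=$(curl -s -u admin:$PASSWORD -d user.name=admin \
    -G "https://$CLUSTER_NAME.azurehdinsight.net/templeton/v1/jobs/$JOB_ID" \
    | jq -r .status.state)
  echo "Job state: $STATE"
  case "$STATE" in
    SUCCEEDED|FAILED|KILLED) break ;;  # terminal states
  esac
  sleep 10  # wait before polling again
done
```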
hdinsight | Apache Hadoop Use Sqoop Mac Linux | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/hadoop/apache-hadoop-use-sqoop-mac-linux.md | Learn how to use Apache Sqoop to import and export between an Apache Hadoop clus 1. For ease of use, set variables. Replace `PASSWORD`, `MYSQLSERVER`, and `MYDATABASE` with the relevant values, and then enter the commands below: ```bash- export password='PASSWORD' - export sqlserver="MYSQLSERVER" - export database="MYDATABASE" + export PASSWORD='PASSWORD' + export SQL_SERVER="MYSQLSERVER" + export DATABASE="MYDATABASE" - export serverConnect="jdbc:sqlserver://$sqlserver.database.windows.net:1433;user=sqluser;password=$password" - export serverDbConnect="jdbc:sqlserver://$sqlserver.database.windows.net:1433;user=sqluser;password=$password;database=$database" + export SERVER_CONNECT="jdbc:sqlserver://$SQL_SERVER.database.windows.net:1433;user=sqluser;password=$PASSWORD" + export SERVER_DB_CONNECT="jdbc:sqlserver://$SQL_SERVER.database.windows.net:1433;user=sqluser;password=$PASSWORD;database=$DATABASE" ``` ## Sqoop export From Hive to SQL. 1. To verify that Sqoop can see your database, enter the command below in your open SSH connection. This command returns a list of databases. ```bash- sqoop list-databases --connect $serverConnect + sqoop list-databases --connect $SERVER_CONNECT ``` 1. Enter the following command to see a list of tables for the specified database: ```bash- sqoop list-tables --connect $serverDbConnect + sqoop list-tables --connect $SERVER_DB_CONNECT ``` 1. To export data from the Hive `hivesampletable` table to the `mobiledata` table in your database, enter the command below in your open SSH connection: ```bash- sqoop export --connect $serverDbConnect \ + sqoop export --connect $SERVER_DB_CONNECT \ -table mobiledata \ --hcatalog-table hivesampletable ``` From Hive to SQL. 1. To verify that data was exported, use the following queries from your SSH connection to view the exported data: ```bash- sqoop eval --connect $serverDbConnect \ + sqoop eval --connect $SERVER_DB_CONNECT \ --query "SELECT COUNT(*) from dbo.mobiledata WITH (NOLOCK)" - sqoop eval --connect $serverDbConnect \ + sqoop eval --connect $SERVER_DB_CONNECT \ --query "SELECT TOP(10) * from dbo.mobiledata WITH (NOLOCK)" ``` From SQL to Azure storage. 1. Enter the command below in your open SSH connection to import data from the `mobiledata` table in SQL, to the `wasbs:///tutorials/usesqoop/importeddata` directory on HDInsight. The fields in the data are separated by a tab character, and the lines are terminated by a new-line character. ```bash- sqoop import --connect $serverDbConnect \ + sqoop import --connect $SERVER_DB_CONNECT \ --table mobiledata \ --target-dir 'wasb:///tutorials/usesqoop/importeddata' \ --fields-terminated-by '\t' \ From SQL to Azure storage. 1. Alternatively, you can also specify a Hive table: ```bash- sqoop import --connect $serverDbConnect \ + sqoop import --connect $SERVER_DB_CONNECT \ --table mobiledata \ --target-dir 'wasb:///tutorials/usesqoop/importeddata2' \ --fields-terminated-by '\t' \ |
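After an import like the ones above, a quick way to confirm the result is to list Sqoop's output files in the target directory. A short check sketch, assuming the import completed; `part-m-00000` is Sqoop's usual name for the first map-task output file, not something this article guarantees:

```bash
# List the files Sqoop wrote, then preview the first few imported rows.
hdfs dfs -ls /tutorials/usesqoop/importeddata
hdfs dfs -cat /tutorials/usesqoop/importeddata/part-m-00000 | head -n 5
```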
hdinsight | Apache Hbase Tutorial Get Started Linux | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/hbase/apache-hbase-tutorial-get-started-linux.md | The HBase REST API is secured via [basic authentication](https://en.wikipedia.or 1. Set environment variables for ease of use. Edit the commands below by replacing `MYPASSWORD` with the cluster login password. Replace `MYCLUSTERNAME` with the name of your HBase cluster. Then enter the commands. ```bash- export password='MYPASSWORD' - export clustername=MYCLUSTERNAME + export PASSWORD='MYPASSWORD' + export CLUSTER_NAME=MYCLUSTERNAME ``` 1. Use the following command to list the existing HBase tables: ```bash- curl -u admin:$password \ - -G https://$clustername.azurehdinsight.net/hbaserest/ + curl -u admin:$PASSWORD \ + -G https://$CLUSTER_NAME.azurehdinsight.net/hbaserest/ ``` 1. Use the following command to create a new HBase table with two column families: ```bash- curl -u admin:$password \ - -X PUT "https://$clustername.azurehdinsight.net/hbaserest/Contacts1/schema" \ + curl -u admin:$PASSWORD \ + -X PUT "https://$CLUSTER_NAME.azurehdinsight.net/hbaserest/Contacts1/schema" \ -H "Accept: application/json" \ -H "Content-Type: application/json" \ -d "{\"@name\":\"Contact1\",\"ColumnSchema\":[{\"name\":\"Personal\"},{\"name\":\"Office\"}]}" \ The HBase REST API is secured via [basic authentication](https://en.wikipedia.or 1. Use the following command to insert some data: ```bash- curl -u admin:$password \ - -X PUT "https://$clustername.azurehdinsight.net/hbaserest/Contacts1/false-row-key" \ + curl -u admin:$PASSWORD \ + -X PUT "https://$CLUSTER_NAME.azurehdinsight.net/hbaserest/Contacts1/false-row-key" \ -H "Accept: application/json" \ -H "Content-Type: application/json" \ -d "{\"Row\":[{\"key\":\"MTAwMA==\",\"Cell\": [{\"column\":\"UGVyc29uYWw6TmFtZQ==\", \"$\":\"Sm9obiBEb2xl\"}]}]}" \ The HBase REST API is secured via [basic authentication](https://en.wikipedia.or 1. Use the following command to get a row: ```bash- curl -u admin:$password \ - GET "https://$clustername.azurehdinsight.net/hbaserest/Contacts1/1000" \ + curl -u admin:$PASSWORD \ + GET "https://$CLUSTER_NAME.azurehdinsight.net/hbaserest/Contacts1/1000" \ -H "Accept: application/json" \ -v ``` |
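To tidy up after experimenting, the same REST surface can drop the table: in the HBase REST (Stargate) API, deleting a table's `schema` resource deletes the table itself. A cleanup sketch, assuming the `$PASSWORD` and `$CLUSTER_NAME` variables set above:

```bash
# Drop the Contacts1 table created in the steps above.
curl -u admin:$PASSWORD \
  -X DELETE "https://$CLUSTER_NAME.azurehdinsight.net/hbaserest/Contacts1/schema" \
  -v
```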
hdinsight | Hdinsight Authorize Users To Ambari | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/hdinsight-authorize-users-to-ambari.md | Write-Output $zookeeperHosts Edit the variables below by replacing `CLUSTERNAME`, `ADMINPASSWORD`, `NEWUSER`, and `USERPASSWORD` with the appropriate values. The script is designed to be executed with bash. Slight modifications would be needed for a Windows command prompt. ```bash-export clusterName="CLUSTERNAME" -export adminPassword='ADMINPASSWORD' -export user="NEWUSER" -export userPassword='USERPASSWORD' +export CLUSTER_NAME="CLUSTERNAME" +export ADMIN_PASSWORD='ADMINPASSWORD' +export USER="NEWUSER" +export USER_PASSWORD='USERPASSWORD' # create user-curl -k -u admin:$adminPassword -H "X-Requested-By: ambari" -X POST \ --d "{\"Users/user_name\":\"$user\",\"Users/password\":\"$userPassword\",\"Users/active\":\"true\",\"Users/admin\":\"false\"}" \-https://$clusterName.azurehdinsight.net/api/v1/users +curl -k -u admin:$ADMIN_PASSWORD -H "X-Requested-By: ambari" -X POST \ +-d "{\"Users/user_name\":\"$USER\",\"Users/password\":\"$USER_PASSWORD\",\"Users/active\":\"true\",\"Users/admin\":\"false\"}" \ +https://$CLUSTER_NAME.azurehdinsight.net/api/v1/users -echo "user created: $user" +echo "user created: $USER" # grant permissions-curl -k -u admin:$adminPassword -H "X-Requested-By: ambari" -X POST \ --d '[{"PrivilegeInfo":{"permission_name":"CLUSTER.USER","principal_name":"'$user'","principal_type":"USER"}}]' \-https://$clusterName.azurehdinsight.net/api/v1/clusters/$clusterName/privileges +curl -k -u admin:$ADMIN_PASSWORD -H "X-Requested-By: ambari" -X POST \ +-d '[{"PrivilegeInfo":{"permission_name":"CLUSTER.USER","principal_name":"'$USER'","principal_type":"USER"}}]' \ +https://$CLUSTER_NAME.azurehdinsight.net/api/v1/clusters/$CLUSTER_NAME/privileges echo "Privilege is granted" echo "Pausing for 10 seconds" sleep 10s # perform query using new user account-curl -k -u $user:$userPassword -H "X-Requested-By: ambari" \ --X GET "https://$clusterName.azurehdinsight.net/api/v1/clusters/$clusterName/services/ZOOKEEPER/components/ZOOKEEPER_SERVER"+curl -k -u $USER:$USER_PASSWORD -H "X-Requested-By: ambari" \ +-X GET "https://$CLUSTER_NAME.azurehdinsight.net/api/v1/clusters/$CLUSTER_NAME/services/ZOOKEEPER/components/ZOOKEEPER_SERVER" ``` ## Grant permissions to Apache Hive views |
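A follow-up check can confirm the account and its privilege through the same Ambari REST API. A verification sketch, assuming the variables exported in the script above:

```bash
# Read the new user back from Ambari.
curl -k -u admin:$ADMIN_PASSWORD -H "X-Requested-By: ambari" \
  -X GET "https://$CLUSTER_NAME.azurehdinsight.net/api/v1/users/$USER"

# List the cluster privileges to confirm CLUSTER.USER was granted.
curl -k -u admin:$ADMIN_PASSWORD -H "X-Requested-By: ambari" \
  -X GET "https://$CLUSTER_NAME.azurehdinsight.net/api/v1/clusters/$CLUSTER_NAME/privileges"
```

Note that exporting `USER` overrides the shell's own login-name variable, so running these checks in a fresh session may be preferable.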
hdinsight | Hdinsight Hadoop Oms Log Analytics Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/hdinsight-hadoop-oms-log-analytics-tutorial.md | description: Learn how to use Azure Monitor logs to monitor jobs running in an H Previously updated : 09/02/2022 Last updated : 04/14/2023 # Use Azure Monitor logs to monitor HDInsight clusters Available HDInsight workbooks: Screenshot of Spark Workbook :::image type="content" source="./media/hdinsight-hadoop-oms-log-analytics-tutorial/hdinsight-spark-workbook.png" alt-text="Spark workbook screenshot"::: -## Use at-scale Insights to monitor multiple clusters --You can log into Azure portal and select Monitoring. In the **Insights** section, you can select **Insights Hub**. Then you can find HDInsight clusters. --In this view, you can monitor multiple HDInsight clusters in one place. - :::image type="content" source="./media/hdinsight-hadoop-oms-log-analytics-tutorial/hdinsight-monitor-insights.png" alt-text="Cluster monitor insights screenshot"::: --You can select the subscription and the HDInsight clusters you want to monitor. --You can see the detail cluster list in each section. --In the **Overview** tab under **Monitored Clusters**, you can see cluster type, critical Alerts, and resource utilizations. - :::image type="content" source="./media/hdinsight-hadoop-oms-log-analytics-tutorial/hdinsight-cluster-alerts.png" alt-text="Cluster monitor alerts screenshot"::: --Also you can see the clusters in each workload type, including Spark, HBase, Hive, and Kafka. --The high-level metrics of each workload type will be presented, including how many active node managers, how many running applications, etc. - ## Configuring performance counters HDInsight supports cluster auditing with Azure Monitor logs, by importing the fol * `log_ambari_audit_CL` - this table provides audit logs from Ambari. * `log_ranger_audit_CL` - this table provides audit logs from Apache Ranger on ESP clusters. - #### [Classic Azure Monitor experience](#tab/previous) ## Prerequisites For management solution instructions, see [Management solutions in Azure](/previ Because the cluster is a brand new cluster, the report doesn't show any activities. -## Configuring performance counters --Azure Monitor supports collecting and analyzing performance metrics for the nodes in your cluster. For more information, see [Linux performance data sources in Azure Monitor](../azure-monitor/agents/data-sources-performance-counters.md#linux-performance-counters). - ## Cluster auditing HDInsight supports cluster auditing with Azure Monitor logs, by importing the following types of logs: |
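Once the audit tables above are populated, they can be queried like any other Log Analytics table. A query sketch, assuming the Azure CLI `log-analytics` extension is installed and a `$WORKSPACE_ID` variable holds your workspace's customer (workspace) ID; both are assumptions, not values from this article:

```azurecli
# Return a sample of recent Ambari audit records from the workspace.
az monitor log-analytics query \
  --workspace "$WORKSPACE_ID" \
  --analytics-query "log_ambari_audit_CL | take 10" \
  --output table
```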
hdinsight | Hdinsight Sales Insights Etl | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/hdinsight-sales-insights-etl.md | If you don't have an Azure subscription, create a [free account](https://azure.m 1. Set variable for resource group. Replace `RESOURCE_GROUP_NAME` with the name of an existing or new resource group, then enter the command: ```bash- resourceGroup="RESOURCE_GROUP_NAME" + RESOURCE_GROUP="RESOURCE_GROUP_NAME" ``` 1. Execute the script. Replace `LOCATION` with a desired value, then enter the command: ```bash- ./scripts/resources.sh $resourceGroup LOCATION + ./scripts/resources.sh $RESOURCE_GROUP LOCATION ``` If you're not sure which region to specify, you can retrieve a list of supported regions for your subscription with the [az account list-locations](/cli/azure/account#az-account-list-locations) command. The default password for SSH access to the clusters is `Thisisapassword1`. If yo 1. To view the names of the clusters, enter the following command: ```bash- sparkClusterName=$(cat resourcesoutputs_remainder.json | jq -r '.properties.outputs.sparkClusterName.value') - llapClusterName=$(cat resourcesoutputs_remainder.json | jq -r '.properties.outputs.llapClusterName.value') + SPARK_CLUSTER_NAME=$(cat resourcesoutputs_remainder.json | jq -r '.properties.outputs.sparkClusterName.value') + LLAP_CLUSTER_NAME=$(cat resourcesoutputs_remainder.json | jq -r '.properties.outputs.llapClusterName.value') - echo "Spark Cluster" $sparkClusterName - echo "LLAP cluster" $llapClusterName + echo "Spark Cluster" $SPARK_CLUSTER_NAME + echo "LLAP cluster" $LLAP_CLUSTER_NAME ``` 1. To view the Azure storage account and access key, enter the following command: ```azurecli- blobStorageName=$(cat resourcesoutputs_storage.json | jq -r '.properties.outputs.blobStorageName.value') + BLOB_STORAGE_NAME=$(cat resourcesoutputs_storage.json | jq -r '.properties.outputs.blobStorageName.value') BLOB_KEY=$(az storage account keys list \- --account-name $blobStorageName \ - --resource-group $resourceGroup \ + --account-name $BLOB_STORAGE_NAME \ + --resource-group $RESOURCE_GROUP \ --query [0].value -o tsv) - echo $blobStorageName - echo $blobKey + echo $BLOB_STORAGE_NAME + echo $BLOB_KEY ``` 1. To view the Data Lake Storage Gen2 account and access key, enter the following command: ```azurecli- ADLSGen2StorageName=$(cat resourcesoutputs_storage.json | jq -r '.properties.outputs.adlsGen2StorageName.value') + ADLSGEN2STORAGENAME=$(cat resourcesoutputs_storage.json | jq -r '.properties.outputs.adlsGen2StorageName.value') - adlsKey=$(az storage account keys list \ - --account-name $ADLSGen2StorageName \ - --resource-group $resourceGroup \ + ADLSKEY=$(az storage account keys list \ + --account-name $ADLSGEN2STORAGENAME \ + --resource-group $RESOURCE_GROUP \ --query [0].value -o tsv) - echo $ADLSGen2StorageName - echo $adlsKey + echo $ADLSGEN2STORAGENAME + echo $ADLSKEY ``` ### Create a data factory This data factory will have one pipeline with two activities: To set up your Azure Data Factory pipeline, execute the command below. You should still be at the `hdinsight-sales-insights-etl` directory. 
```bash-blobStorageName=$(cat resourcesoutputs_storage.json | jq -r '.properties.outputs.blobStorageName.value') -ADLSGen2StorageName=$(cat resourcesoutputs_storage.json | jq -r '.properties.outputs.adlsGen2StorageName.value') +BLOB_STORAGE_NAME=$(cat resourcesoutputs_storage.json | jq -r '.properties.outputs.blobStorageName.value') +ADLSGEN2STORAGENAME=$(cat resourcesoutputs_storage.json | jq -r '.properties.outputs.adlsGen2StorageName.value') -./scripts/adf.sh $resourceGroup $ADLSGen2StorageName $blobStorageName +./scripts/adf.sh $RESOURCE_GROUP $ADLSGEN2STORAGENAME $BLOB_STORAGE_NAME ``` This script does the following things: For other ways to transform data by using HDInsight, see [this article on using 1. Copy the `query.hql` file to the LLAP cluster by using SCP. Enter the command: ```bash- llapClusterName=$(cat resourcesoutputs_remainder.json | jq -r '.properties.outputs.llapClusterName.value') - scp scripts/query.hql sshuser@$llapClusterName-ssh.azurehdinsight.net:/home/sshuser/ + LLAP_CLUSTER_NAME=$(cat resourcesoutputs_remainder.json | jq -r '.properties.outputs.llapClusterName.value') + scp scripts/query.hql sshuser@$LLAP_CLUSTER_NAME-ssh.azurehdinsight.net:/home/sshuser/ ``` Reminder: The default password is `Thisisapassword1`. For other ways to transform data by using HDInsight, see [this article on using 1. Use SSH to access the LLAP cluster. Enter the command: ```bash- ssh sshuser@$llapClusterName-ssh.azurehdinsight.net + ssh sshuser@$LLAP_CLUSTER_NAME-ssh.azurehdinsight.net ``` 1. Use the following command to run the script: If you're not going to continue to use this application, delete all resources by 1. To remove the resource group, enter the command: ```azurecli- az group delete -n $resourceGroup + az group delete -n $RESOURCE_GROUP ``` 1. To remove the service principal, enter the commands: ```azurecli- servicePrincipal=$(cat serviceprincipal.json | jq -r '.name') - az ad sp delete --id $servicePrincipal + SERVICE_PRINCIPAL=$(cat serviceprincipal.json | jq -r '.name') + az ad sp delete --id $SERVICE_PRINCIPAL ``` ## Next steps |
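Resource group deletion can take a few minutes to complete. A small check sketch, assuming the `$RESOURCE_GROUP` variable from the steps above, confirms the group is actually gone before you delete the service principal:

```azurecli
# Prints "false" once the resource group has been fully deleted.
az group exists --name $RESOURCE_GROUP
```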
hdinsight | Apache Kafka Get Started | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/kafka/apache-kafka-get-started.md | In this section, you get the host information from the Apache Ambari REST API on 1. Set up password variable. Replace `PASSWORD` with the cluster login password, then enter the command: ```bash- export password='PASSWORD' + export PASSWORD='PASSWORD' ``` 1. Extract the correctly cased cluster name. The actual casing of the cluster name may be different than you expect, depending on how the cluster was created. This command will obtain the actual casing, and then store it in a variable. Enter the following command: ```bash- export clusterName=$(curl -u admin:$password -sS -G "http://headnodehost:8080/api/v1/clusters" | jq -r '.items[].Clusters.cluster_name') + export CLUSTER_NAME=$(curl -u admin:$PASSWORD -sS -G "http://headnodehost:8080/api/v1/clusters" | jq -r '.items[].Clusters.cluster_name') ``` > [!Note] In this section, you get the host information from the Apache Ambari REST API on 1. To set an environment variable with Zookeeper host information, use the command below. The command retrieves all Zookeeper hosts, then returns only the first two entries. This is because you want some redundancy in case one host is unreachable. ```bash- export KAFKAZKHOSTS=$(curl -sS -u admin:$password -G https://$clusterName.azurehdinsight.net/api/v1/clusters/$clusterName/services/ZOOKEEPER/components/ZOOKEEPER_SERVER | jq -r '["\(.host_components[].HostRoles.host_name):2181"] | join(",")' | cut -d',' -f1,2); + export KAFKAZKHOSTS=$(curl -sS -u admin:$PASSWORD -G https://$CLUSTER_NAME.azurehdinsight.net/api/v1/clusters/$CLUSTER_NAME/services/ZOOKEEPER/components/ZOOKEEPER_SERVER | jq -r '["\(.host_components[].HostRoles.host_name):2181"] | join(",")' | cut -d',' -f1,2); ``` > [!Note] In this section, you get the host information from the Apache Ambari REST API on 1. To set an environment variable with Apache Kafka broker host information, use the following command: ```bash- export KAFKABROKERS=$(curl -sS -u admin:$password -G https://$clusterName.azurehdinsight.net/api/v1/clusters/$clusterName/services/KAFKA/components/KAFKA_BROKER | jq -r '["\(.host_components[].HostRoles.host_name):9092"] | join(",")' | cut -d',' -f1,2); + export KAFKABROKERS=$(curl -sS -u admin:$PASSWORD -G https://$CLUSTER_NAME.azurehdinsight.net/api/v1/clusters/$CLUSTER_NAME/services/KAFKA/components/KAFKA_BROKER | jq -r '["\(.host_components[].HostRoles.host_name):9092"] | join(",")' | cut -d',' -f1,2); ``` > [!Note] |
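With `$KAFKAZKHOSTS` populated, the standard Kafka tools on the cluster can manage topics. A minimal sketch, assuming the usual HDInsight tool path under `/usr/hdp/current/kafka-broker/bin` and an example topic name of `test`:

```bash
# Create a topic with redundancy across brokers, then confirm it exists.
/usr/hdp/current/kafka-broker/bin/kafka-topics.sh --create \
  --zookeeper $KAFKAZKHOSTS \
  --replication-factor 3 --partitions 8 --topic test

/usr/hdp/current/kafka-broker/bin/kafka-topics.sh --list \
  --zookeeper $KAFKAZKHOSTS
```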
hdinsight | Apache Kafka Producer Consumer Api | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/kafka/apache-kafka-producer-consumer-api.md | If you would like to skip this step, prebuilt jars can be downloaded from the `P ```bash sudo apt -y install jq- export clusterName='<clustername>' - export password='<password>' - export KAFKABROKERS=$(curl -sS -u admin:$password -G https://$clusterName.azurehdinsight.net/api/v1/clusters/$clusterName/services/KAFKA/components/KAFKA_BROKER | jq -r '["\(.host_components[].HostRoles.host_name):9092"] | join(",")' | cut -d',' -f1,2); + export CLUSTER_NAME='<clustername>' + export PASSWORD='<password>' + export KAFKABROKERS=$(curl -sS -u admin:$PASSWORD -G https://$CLUSTER_NAME.azurehdinsight.net/api/v1/clusters/$CLUSTER_NAME/services/KAFKA/components/KAFKA_BROKER | jq -r '["\(.host_components[].HostRoles.host_name):9092"] | join(",")' | cut -d',' -f1,2); ``` > [!Note] |
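With `$KAFKABROKERS` set, the producer and consumer can be exercised from the same SSH session. A usage sketch, assuming the prebuilt `kafka-producer-consumer.jar` mentioned above sits in the current directory and a topic named `test` already exists; the jar name and argument order are illustrative:

```bash
# Write a batch of test records, then read them back.
java -jar kafka-producer-consumer.jar producer test $KAFKABROKERS
java -jar kafka-producer-consumer.jar consumer test $KAFKABROKERS
```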
healthcare-apis | Device Messages Through Iot Hub | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/iot/device-messages-through-iot-hub.md | For enhanced workflows and ease of use, you can use the MedTech service to recei :::image type="content" source="media\device-messages-through-iot-hub\data-flow-diagram.png" border="false" alt-text="Diagram of the IoT device message flow through an IoT hub and event hub, and then into the MedTech service." lightbox="media\device-messages-through-iot-hub\data-flow-diagram.png"::: > [!TIP]-> To learn how the MedTech service transforms and persists device message data into the FHIR service as FHIR Observations, see [Overview of the MedTech service device message processing stages](overview-of-device-message-processing-stages.md). +> To learn how the MedTech service transforms and persists device message data into the FHIR service as FHIR Observations, see [Overview of the MedTech service device data processing stages](overview-of-device-data-processing-stages.md). In this tutorial, you learn how to: |
healthcare-apis | Frequently Asked Questions | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/iot/frequently-asked-questions.md | The MedTech service is available in these Azure regions: [Products available by ### Can I use the MedTech service with a different FHIR service other than the Azure Health Data Services FHIR service? -No. The MedTech service currently only supports the Azure Health Data Services FHIR service for the persistence of transformed device message data. The open-source version of the MedTech service supports the use of different FHIR services. +No. The MedTech service currently only supports the Azure Health Data Services FHIR service for the persistence of transformed device data. The open-source version of the MedTech service supports the use of different FHIR services. To learn more about the MedTech service open-source projects, see [Open-source projects](git-projects.md). The MedTech service supports the [HL7 FHIR® R4](https://www.hl7.org/impleme ### Why do I have to provide device and FHIR destination mappings to the MedTech service? -The MedTech service requires device and FHIR destination mappings to perform normalization and transformation processes on device message data. To learn how the MedTech service transforms device message data into [FHIR Observations](https://www.hl7.org/fhir/observation.html), see [Overview of the MedTech service device message processing stages](overview-of-device-message-processing-stages.md). +The MedTech service requires device and FHIR destination mappings to perform normalization and transformation processes on device message data. To learn how the MedTech service transforms device data into [FHIR Observations](https://www.hl7.org/fhir/observation.html), see [Overview of the MedTech service device data processing stages](overview-of-device-data-processing-stages.md). ### Is JsonPathContent still supported by the MedTech service device mapping? Yes. JsonPathContent can be used as a template type within [CollectionContent](overview-of-device-mapping.md#collectioncontent). It's recommended that [CalculatedContent](how-to-use-calculatedcontent-mappings.md) is used as it supports all of the features of JsonPathContent with extra support for more advanced features. -### How long does it take for device message data to show up in the FHIR service? +### How long does it take for device data to show up in the FHIR service? -The MedTech service buffers [FHIR Observations](https://www.hl7.org/fhir/observation.html) created during the transformation stage and provides near real-time processing. However, this buffer can potentially delay the persistence of FHIR Observations to the FHIR service up to ~five minutes. To learn how the MedTech service transforms device message data into FHIR Observations, see [Overview of the MedTech service device message processing stages](overview-of-device-message-processing-stages.md). +The MedTech service buffers [FHIR Observations](https://www.hl7.org/fhir/observation.html) created during the transformation stage and provides near real-time processing. However, this buffer can potentially delay the persistence of FHIR Observations to the FHIR service up to ~five minutes. To learn how the MedTech service transforms device data into FHIR Observations, see [Overview of the MedTech service device data processing stages](overview-of-device-data-processing-stages.md). 
### Why are the device messages added to the event hub not showing up as FHIR Observations in the FHIR service? |
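To make the CollectionContent/JsonPathContent relationship concrete, here's a hypothetical device mapping sketch: the template types come from the FAQ above, while every field value (type name, JSONPath expressions, value names) is illustrative rather than taken from this article:

```bash
# Write an example device mapping: a CollectionContent wrapper holding one
# JsonPathContent template that extracts a heart-rate value. All values are
# illustrative placeholders, not a mapping documented in this article.
cat > devicecontent-example.json <<'EOF'
{
  "templateType": "CollectionContent",
  "template": [
    {
      "templateType": "JsonPathContent",
      "template": {
        "typeName": "heartrate",
        "typeMatchExpression": "$..[?(@.heartRate)]",
        "deviceIdExpression": "$.deviceId",
        "timestampExpression": "$.measurementDateTime",
        "values": [
          {
            "required": "true",
            "valueExpression": "$.heartRate",
            "valueName": "hr"
          }
        ]
      }
    }
  ]
}
EOF
```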
healthcare-apis | Get Started | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/iot/get-started.md | For more information about assigning roles to the FHIR services, see [Configure For more information about application roles, see [Authentication and Authorization for Azure Health Data Services](../authentication-authorization.md). -## Step 5: Send the data for processing +## Step 5: Send the device data for processing -When the MedTech service is deployed and connected to the Event Hubs and FHIR services, it's ready to process data from a device and translate it into a FHIR service Observation. There are three parts of the sending process. +When the MedTech service is deployed and connected to the Event Hubs and FHIR services, it's ready to process device data and transform it into FHIR Observations. There are three parts of the sending process. -### Data sent from Device to Event Hubs +### Device data sent to Event Hubs -The data is sent to an Event Hubs instance so that it can wait until the MedTech service is ready to receive it. The data transfer needs to be asynchronous because it's sent over the Internet and delivery times can't be precisely measured. Normally the data won't sit on an event hub longer than 24 hours. +The device data is sent to an Event Hubs instance so that it can wait until the MedTech service is ready to receive it. The device data transfer needs to be asynchronous because it's sent over the Internet and delivery times can't be precisely measured. Normally the data won't sit on an event hub longer than 24 hours. For more information about Event Hubs, see [Event Hubs](../../event-hubs/event-hubs-about.md). For more information on Event Hubs data retention, see [Event Hubs quotas](../../event-hubs/event-hubs-quotas.md) -### Data Sent from Event Hubs to the MedTech service +### Device data sent from Event Hubs to the MedTech service -MedTech requests the data from the Event Hubs instance and the data is sent from the event hub to the MedTech service. This procedure is called ingestion. +MedTech requests the device data from the Event Hubs instance and the device data is sent from the event hub to the MedTech service. This procedure is called ingestion. -### The MedTech service processes the data +### The MedTech service processes the device data -The MedTech service processes the data in five steps: +The MedTech service processes the device data in five steps: - Ingest - Normalize The MedTech service processes the data in five steps: If the processing was successful and you didn't get any error messages, your device data is now a FHIR service [Observation](http://hl7.org/fhir/observation.html) resource. -For more information on the MedTech service device message data transformation, see [Overview of the MedTech service device message processing stages](overview-of-device-message-processing-stages.md). +For more information on the MedTech service device data transformation, see [Overview of the MedTech service device data processing stages](overview-of-device-data-processing-stages.md). -## Step 6: Verify the processed data +## Step 6: Verify the processed device data -You can verify that the data was processed correctly by checking to see if there's now a new Observation resource in the FHIR service. If the data isn't mapped or if the mapping isn't authored properly, the data will be skipped. 
If there are any problems, check the [device mapping](overview-of-device-mapping.md) or the [FHIR destination mapping](how-to-configure-fhir-mappings.md). +You can verify that the device data was processed correctly by checking to see if there's now a new Observation resource in the FHIR service. If the device data isn't mapped or if the mapping isn't authored properly, the device data will be skipped. If there are any problems, check the [device mapping](overview-of-device-mapping.md) or the [FHIR destination mapping](how-to-configure-fhir-mappings.md). ### Metrics -You can verify that the data is correctly persisted into the FHIR service by using the [MedTech service metrics](how-to-configure-metrics.md) in the Azure portal. +You can verify that the device data is correctly persisted in the FHIR service by using the [MedTech service metrics](how-to-configure-metrics.md) in the Azure portal. ## Next steps |
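To make step 5 of the Get Started article above concrete: the payload a device (or test client) sends to the event hub is plain JSON. Here's a hedged example; the property names `deviceId`, `heartRate`, and `endDate` are illustrative assumptions and must line up with the expressions in your device mapping.

```json
{
  "deviceId": "device01",
  "heartRate": 78,
  "endDate": "2023-03-13T22:46:01.875Z"
}
```

If the device mapping matches this shape, the five processing steps listed above would normalize the message and persist a corresponding FHIR Observation.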
healthcare-apis | How To Configure Fhir Mappings | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/iot/how-to-configure-fhir-mappings.md | - Title: How to configure FHIR destination mappings in the MedTech service - Azure Health Data Services -description: This article describes how to configure FHIR destination mappings in Azure Health Data Services MedTech service. ---- Previously updated : 1/12/2023----# How to configure FHIR destination mappings --This article describes how to configure the MedTech service using the Fast Healthcare Interoperability Resources (FHIR®) destination mappings. --Below is a conceptual example of what happens during the normalization and transformation process within the MedTech service: ---## FHIR destination mappings --Once the device content is extracted into a normalized model, the data is collected and grouped according to device identifier, measurement type, and time period. The output of this grouping is sent for conversion into a FHIR resource ([Observation](https://www.hl7.org/fhir/observation.html) currently). The FHIR destination mapping template controls how the data is mapped into a FHIR observation. Should an observation be created for a point in time or over a period of an hour? What codes should be added to the observation? Should the value be represented as [SampledData](https://www.hl7.org/fhir/datatypes.html#SampledData) or a [Quantity](https://www.hl7.org/fhir/datatypes.html#Quantity)? These data types are all options the FHIR destination mappings -configuration controls. --> [!NOTE] -> Mappings are stored in an underlying blob storage and loaded from blob per compute execution. Once updated they should take effect immediately. --## FHIR destination mappings validations --The validation process validates the FHIR destination mappings before allowing them to be saved for use. These elements are required in the FHIR destination mappings templates. --**FHIR destination mappings** --|Element|Required| -|:|:-| -|TypeName|True| --> [!NOTE] -> This is the only required FHIR destination mapping element validated at this time. --### CodeValueFhirTemplate --The CodeValueFhirTemplate is currently the only template supported in FHIR destination mapping at this time. It allows you to define codes, the effective period, and the value of the observation. Multiple value types are supported: [SampledData](https://www.hl7.org/fhir/datatypes.html#SampledData), [CodeableConcept](https://www.hl7.org/fhir/datatypes.html#CodeableConcept), and [Quantity](https://www.hl7.org/fhir/datatypes.html#Quantity). Along with these configurable values, the identifier for the Observation resource and linking to the proper Device and Patient resources are handled automatically. --| Property | Description -| | -|**TypeName**| The type of measurement this template should bind to. There should be at least one Device mapping template that outputs this type. -|**PeriodInterval**|The period of time the observation created should represent. Supported values are 0 (an instance), 60 (an hour), 1440 (a day). -|**Category**|Any number of [CodeableConcepts](http://hl7.org/fhir/datatypes-definitions.html#codeableconcept) to classify the type of observation created. -|**Codes**|One or more [Codings](http://hl7.org/fhir/datatypes-definitions.html#coding) to apply to the observation created. -|**Codes[].Code**|The code for the [Coding](http://hl7.org/fhir/datatypes-definitions.html#coding). 
-|**Codes[].System**|The system for the [Coding](http://hl7.org/fhir/datatypes-definitions.html#coding). -|**Codes[].Display**|The display for the [Coding](http://hl7.org/fhir/datatypes-definitions.html#coding). -|**Value**|The value to extract and represent in the observation. For more information, see [Value Type Templates](#value-type-templates). -|**Components**|*Optional:* One or more components to create on the observation. -|**Components[].Codes**|One or more [Codings](http://hl7.org/fhir/datatypes-definitions.html#coding) to apply to the component. -|**Components[].Value**|The value to extract and represent in the component. For more information, see [Value Type Templates](#value-type-templates). --### Value type templates --Below are the currently supported value type templates: --#### SampledData --Represents the [SampledData](http://hl7.org/fhir/datatypes.html#SampledData) FHIR data type. Observation measurements are written to a value stream starting at a point in time and incrementing forward using the period defined. If no value is present, an `E` will be written into the data stream. If the period is such that two or more values occupy the same position in the data stream, the latest value is used. The same logic is applied when an observation using the SampledData is updated. --| Property | Description -| | -|**DefaultPeriod**|The default period in milliseconds to use. -|**Unit**|The unit to set on the origin of the SampledData. --#### Quantity --Represents the [Quantity](http://hl7.org/fhir/datatypes.html#Quantity) FHIR data type. If more than one value is present in the grouping, only the first value is used. When a new value arrives that maps to the same observation, it overwrites the old value. --| Property | Description -| | -|**Unit**| Unit representation. -|**Code**| Coded form of the unit. -|**System**| System that defines the coded unit form. --### CodeableConcept --Represents the [CodeableConcept](http://hl7.org/fhir/datatypes.html#CodeableConcept) FHIR data type. The actual value isn't used. --| Property | Description -| | -|**Text**|Plain text representation. -|**Codes**|One or more [Codings](http://hl7.org/fhir/datatypes-definitions.html#coding) to apply to the observation created. -|**Codes[].Code**|The code for the [Coding](http://hl7.org/fhir/datatypes-definitions.html#coding). -|**Codes[].System**|The system for the [Coding](http://hl7.org/fhir/datatypes-definitions.html#coding). -|**Codes[].Display**|The display for the [Coding](http://hl7.org/fhir/datatypes-definitions.html#coding). 
--### Examples --**Heart rate - SampledData** --```json -{ - "templateType": "CodeValueFhir", - "template": { - "codes": [ - { - "code": "8867-4", - "system": "http://loinc.org", - "display": "Heart rate" - } - ], - "periodInterval": 60, - "typeName": "heartrate", - "value": { - "defaultPeriod": 5000, - "unit": "count/min", - "valueName": "hr", - "valueType": "SampledData" - } - } -} -``` --**Steps - SampledData** --```json -{ - "templateType": "CodeValueFhir", - "template": { - "codes": [ - { - "code": "55423-8", - "system": "http://loinc.org", - "display": "Number of steps" - } - ], - "periodInterval": 60, - "typeName": "stepsCount", - "value": { - "defaultPeriod": 5000, - "unit": "", - "valueName": "steps", - "valueType": "SampledData" - } - } -} -``` --**Blood pressure - SampledData** --```json -{ - "templateType": "CodeValueFhir", - "template": { - "codes": [ - { - "code": "85354-9", - "display": "Blood pressure panel with all children optional", - "system": "http://loinc.org" - } - ], - "periodInterval": 60, - "typeName": "bloodpressure", - "components": [ - { - "codes": [ - { - "code": "8867-4", - "display": "Diastolic blood pressure", - "system": "http://loinc.org" - } - ], - "value": { - "defaultPeriod": 5000, - "unit": "mmHg", - "valueName": "diastolic", - "valueType": "SampledData" - } - }, - { - "codes": [ - { - "code": "8480-6", - "display": "Systolic blood pressure", - "system": "http://loinc.org" - } - ], - "value": { - "defaultPeriod": 5000, - "unit": "mmHg", - "valueName": "systolic", - "valueType": "SampledData" - } - } - ] - } -} -``` --**Blood pressure - Quantity** --```json -{ - "templateType": "CodeValueFhir", - "template": { - "codes": [ - { - "code": "85354-9", - "display": "Blood pressure panel with all children optional", - "system": "http://loinc.org" - } - ], - "periodInterval": 0, - "typeName": "bloodpressure", - "components": [ - { - "codes": [ - { - "code": "8867-4", - "display": "Diastolic blood pressure", - "system": "http://loinc.org" - } - ], - "value": { - "unit": "mmHg", - "valueName": "diastolic", - "valueType": "Quantity" - } - }, - { - "codes": [ - { - "code": "8480-6", - "display": "Systolic blood pressure", - "system": "http://loinc.org" - } - ], - "value": { - "unit": "mmHg", - "valueName": "systolic", - "valueType": "Quantity" - } - } - ] - } -} -``` --**Device removed - CodeableConcept** --```json -{ - "templateType": "CodeValueFhir", - "template": { - "codes": [ - { - "code": "deviceEvent", - "system": "https://www.mydevice.com/v1", - "display": "Device Event" - } - ], - "periodInterval": 0, - "typeName": "deviceRemoved", - "value": { - "text": "Device Removed", - "codes": [ - { - "code": "deviceRemoved", - "system": "https://www.mydevice.com/v1", - "display": "Device Removed" - } - ], - "valueName": "deviceRemoved", - "valueType": "CodeableConcept" - } - } -} -``` --> [!TIP] -> See the MedTech service article [Troubleshoot MedTech service errors](troubleshoot-errors.md) for assistance fixing common MedTech service errors. --## Next steps --In this article, you learned how to configure FHIR destination mappings. --To learn about how to configure device mappings, see --> [!div class="nextstepaction"] -> [How to configure device mappings](how-to-configure-device-mappings.md) --FHIR® is a registered trademark of Health Level Seven International, registered in the U.S. Trademark Office and is used with their permission. |
healthcare-apis | How To Configure Metrics | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/iot/how-to-configure-metrics.md | Metric category|Metric name|Metric description| |--|--|--| |Availability|IotConnector Health Status|The overall health of the MedTech service.| |Errors|Total Error Count|The total number of errors.|-|Latency|Average Group Stage Latency|The average latency of the group stage. The [group stage](overview-of-device-message-processing-stages.md#groupoptional) performs buffering, aggregating, and grouping on normalized messages.| -|Latency|Average Normalize Stage Latency|The average latency of the normalized stage. The [normalized stage](overview-of-device-message-processing-stages.md#normalize) performs normalization on raw incoming messages.| -|Traffic|Number of Fhir resources saved|The total number of FHIR resources [updated or persisted](overview-of-device-message-processing-stages.md#persist) by the MedTech service.| -|Traffic|Number of Incoming Messages|The number of received raw [incoming messages](overview-of-device-message-processing-stages.md#ingest) (for example, the device events) from the configured source event hub.| -|Traffic|Number of Measurements|The number of normalized value readings received by the FHIR [transformation stage](overview-of-device-message-processing-stages.md#transform) of the MedTech service.| +|Latency|Average Group Stage Latency|The average latency of the group stage. The [group stage](overview-of-device-data-processing-stages.md#groupoptional) performs buffering, aggregating, and grouping on normalized messages.| +|Latency|Average Normalize Stage Latency|The average latency of the normalized stage. The [normalized stage](overview-of-device-data-processing-stages.md#normalize) performs normalization on raw incoming messages.| +|Traffic|Number of Fhir resources saved|The total number of FHIR resources [updated or persisted](overview-of-device-data-processing-stages.md#persist) by the MedTech service.| +|Traffic|Number of Incoming Messages|The number of received raw [incoming messages](overview-of-device-data-processing-stages.md#ingest) (for example, the device events) from the configured source event hub.| +|Traffic|Number of Measurements|The number of normalized value readings received by the FHIR [transformation stage](overview-of-device-data-processing-stages.md#transform) of the MedTech service.| |Traffic|Number of Message Groups|The number of groups that have messages aggregated in the designated time window.| |Traffic|Number of Normalized Messages|The number of normalized messages.| |
healthcare-apis | How To Use Calculatedcontent Mappings | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/iot/how-to-use-calculatedcontent-mappings.md | +> [!NOTE] +> [Fast Healthcare Interoperability Resources (FHIR®)](https://www.hl7.org/fhir/) is an open healthcare specification. + This article describes how to use CalculatedContent mappings with MedTech service device mappings in Azure Health Data Services. ## Overview of CalculatedContent mappings |
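As a companion to the CalculatedContent article above, here's a minimal sketch of a CalculatedContent template inside CollectionContent. The device message fields (`heartRate`, `deviceId`, `endDate`) are hypothetical, and the sketch assumes CalculatedContent expressions reference the matched device message through `matchedToken`:

```json
{
  "templateType": "CollectionContent",
  "template": [
    {
      "templateType": "CalculatedContent",
      "template": {
        "typeName": "heartrate",
        "typeMatchExpression": "$..[?(@heartRate)]",
        "deviceIdExpression": "$.matchedToken.deviceId",
        "timestampExpression": "$.matchedToken.endDate",
        "values": [
          {
            "required": "true",
            "valueExpression": "$.matchedToken.heartRate",
            "valueName": "hr"
          }
        ]
      }
    }
  ]
}
```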
healthcare-apis | How To Use Custom Functions | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/iot/how-to-use-custom-functions.md | +> [!NOTE] +> [Fast Healthcare Interoperability Resources (FHIR®)](https://www.hl7.org/fhir/) is an open healthcare specification. + Many functions are available when using **JMESPath** as the expression language. Besides the functions available as part of the JMESPath specification, many more custom functions may also be used. This article describes the MedTech service-specific custom functions for use with the MedTech service [device mapping](how-to-configure-device-mappings.md) during the device message [normalization](understand-service.md#normalize) process. > [!TIP] |
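To show where a custom function fits in a device mapping, here's a hedged sketch of a CalculatedContent template whose timestamp expression calls `fromUnixTimestampMs` through JMESPath to convert a hypothetical `time` field holding Unix milliseconds. The `deviceId` and `heartRate` fields and the per-expression `language` object form are assumptions for illustration:

```json
{
  "templateType": "CalculatedContent",
  "template": {
    "typeName": "heartrate",
    "typeMatchExpression": "$..[?(@heartRate)]",
    "deviceIdExpression": "$.matchedToken.deviceId",
    "timestampExpression": {
      "value": "fromUnixTimestampMs(matchedToken.time)",
      "language": "JmesPath"
    },
    "values": [
      {
        "required": "true",
        "valueExpression": "$.matchedToken.heartRate",
        "valueName": "hr"
      }
    ]
  }
}
```

A message such as `{"heartRate": "78", "deviceId": "device01", "time": 1614269338000}` would then normalize with an ISO 8601 timestamp instead of the raw epoch value.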
healthcare-apis | How To Use Iotjsonpathcontent Mappings | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/iot/how-to-use-iotjsonpathcontent-mappings.md | + + Title: How to use IotJsonPathContent mappings in the MedTech service device mappings - Azure Health Data Services +description: This article describes how to use IotJsonPathContent mappings with the MedTech service device mappings. ++++ Last updated : 04/13/2023++++# How to use IotJsonPathContent mappings ++> [!NOTE] +> [Fast Healthcare Interoperability Resources (FHIR®)](https://www.hl7.org/fhir/) is an open healthcare specification. ++This article describes how to use IotJsonPathContent mappings with the MedTech service [device mappings](overview-of-device-mapping.md). ++## IotJsonPathContent ++IotJsonPathContent is similar to JsonPathContent, except that the `DeviceIdExpression` and `TimestampExpression` aren't required. ++The assumption, when using this template, is that the device messages being evaluated were sent using the [Azure IoT Hub Device SDKs](../../iot-hub/iot-hub-devguide-sdks.md#azure-iot-hub-device-sdks) or the [Export Data (legacy)](../../iot-central/core/howto-export-data-legacy.md) feature of [Azure IoT Central](../../iot-central/core/overview-iot-central.md). ++When you're using these SDKs, the device identity and the timestamp of the message are known. ++> [!IMPORTANT] +> Make sure that you're using a device identifier from Azure IoT Hub or Azure IoT Central that is registered as an identifier for a device resource on the destination FHIR service. ++If you're using Azure IoT Hub Device SDKs, you can still use the JsonPathContentTemplate, assuming that you're using custom properties in the message body for the device identity or measurement timestamp. ++> [!NOTE] +> When using `IotJsonPathContent`, the `TypeMatchExpression` should resolve to the entire message as a JToken. For more information, see the following examples: ++### Examples ++With each of these examples, you're provided with: + * A valid device message. + * An example of what the device message will look like after being received and processed by the IoT hub. + * Conforming and valid MedTech service device mappings for normalizing the device message after IoT hub processing. + * An example of what the MedTech service device message will look like after normalization. ++> [!IMPORTANT] +> To avoid device spoofing in device-to-cloud messages, Azure IoT Hub enriches all messages with additional properties. To learn more about these properties, see [Anti-spoofing properties](../../iot-hub/iot-hub-devguide-messages-construct.md#anti-spoofing-properties). ++> [!TIP] +> [Visual Studio Code with the Azure IoT Hub extension](https://marketplace.visualstudio.com/items?itemName=vsciot-vscode.azure-iot-toolkit) is a recommended method for sending IoT device messages to your IoT Hub for testing and troubleshooting. ++**Heart rate** ++**A valid device message to send to your IoT hub.** ++```json ++{"heartRate" : "78"} ++``` ++**An example of what the device message will look like after being received and processed by the IoT hub.** ++> [!NOTE] +> The IoT hub enriches the device message with all properties starting with `iothub` before sending it to the MedTech service device event hub. For example: `iothub-creation-time-utc`. +> +> `patientIdExpression` is only required for MedTech services in the **Create** mode. However, if **Lookup** is being used, a Device resource with a matching Device Identifier must exist in the FHIR service. 
These examples assume your MedTech service is in a **Create** mode. For more information on the **Create** and **Lookup** **Destination properties**, see [Configure Destination properties](deploy-05-new-config.md#destination-properties). ++```json ++{ + "Body": { + "heartRate": "78" + }, + "Properties": { + "iothub-creation-time-utc" : "2021-02-01T22:46:01.8750000Z" + }, + "SystemProperties": { + "iothub-connection-device-id" : "device123" + } +} ++``` ++**Conforming and valid MedTech service device mappings for normalizing device data after IoT Hub processing.** ++```json ++{ + "templateType": "CollectionContent", + "template": [ + { + "templateType": "IotJsonPathContentTemplate", + "template": { + "typeName": "heartRate", + "typeMatchExpression": "$..[?(@Body.heartRate)]", + "patientIdExpression": "$.SystemProperties.iothub-connection-device-id", + "values": [ + { + "required": "true", + "valueExpression": "$.Body.heartRate", + "valueName": "hr" + } + ] + } + } + ] +} ++``` ++**An example of what the MedTech service device data will look like after the normalization process.** ++```json ++{ + "type": "heartRate", + "occurrenceTimeUtc": "2021-02-01T22:46:01.875Z", + "deviceId": "device123", + "properties": [ + { + "name": "hr", + "value": "78" + } + ] +} ++``` ++**Blood pressure** ++**A valid IoT device message to send to your IoT hub.** ++```json ++{ + "systolic": "123", + "diastolic": "87" +} ++``` ++**An example of what the device message will look like after being received and processed by the IoT hub.** ++> [!NOTE] +> The IoT hub enriches the device message with all properties starting with `iothub` before sending it to the MedTech service device event hub. For example: `iothub-creation-time-utc`. +> +> `patientIdExpression` is only required for MedTech services in the **Create** mode. However, if **Lookup** is being used, a Device resource with a matching Device Identifier must exist in the FHIR service. These examples assume your MedTech service is in a **Create** mode. For more information on the **Create** and **Lookup** **Destination properties**, see [Configure Destination properties](deploy-05-new-config.md#destination-properties). 
++```json ++{ + "Body": { + "systolic": "123", + "diastolic" : "87" + }, + "Properties": { + "iothub-creation-time-utc" : "2021-02-01T22:46:01.8750000Z" + }, + "SystemProperties": { + "iothub-connection-device-id" : "device123" + } +} ++``` ++**Conforming and valid MedTech service device mappings for normalizing the device data after IoT hub processing.** ++```json ++{ + "templateType": "CollectionContent", + "template": [ + { + "templateType": "IotJsonPathContentTemplate", + "template": { + "typeName": "bloodpressure", + "typeMatchExpression": "$..[?(@Body.systolic && @Body.diastolic)]", + "patientIdExpression": "$.SystemProperties.iothub-connection-device-id", + "values": [ + { + "required": "true", + "valueExpression": "$.Body.systolic", + "valueName": "systolic" + }, + { + "required": "true", + "valueExpression": "$.Body.diastolic", + "valueName": "diastolic" + } + ] + } + } + ] +} ++``` ++**An example of what the MedTech service device data will look like after the normalization process.** ++```json ++{ + "type": "bloodpressure", + "occurrenceTimeUtc": "2021-02-01T22:46:01.875Z", + "deviceId": "device123", + "properties": [ + { + "name": "systolic", + "value": "123" + }, + { + "name": "diastolic", + "value": "87" + } + ] +} ++``` ++> [!TIP] +> The IotJsonPathContent device mapping examples provided in this article may be combined into a single MedTech service device mapping, as shown. +> +> Additionally, IotJsonPathContent can also be combined with other template types, such as [JsonPathContent mappings](how-to-use-jsonpath-content-mappings.md), to further expand your MedTech service device mapping. ++**Combined heart rate and blood pressure MedTech service device mapping example.** ++```json ++{ + "templateType": "CollectionContent", + "template": [ + { + "templateType": "IotJsonPathContent", + "template": { + "typeName": "heartRate", + "typeMatchExpression": "$..[?(@Body.heartRate)]", + "patientIdExpression": "$.SystemProperties.iothub-connection-device-id", + "values": [ + { + "required": "true", + "valueExpression": "$.Body.heartRate", + "valueName": "hr" + } + ] + } + }, + { + "templateType": "IotJsonPathContent", + "template": { + "typeName": "bloodpressure", + "typeMatchExpression": "$..[?(@Body.systolic && @Body.diastolic)]", + "patientIdExpression": "$.SystemProperties.iothub-connection-device-id", + "values": [ + { + "required": "true", + "valueExpression": "$.Body.systolic", + "valueName": "systolic" + }, + { + "required": "true", + "valueExpression": "$.Body.diastolic", + "valueName": "diastolic" + } + ] + } + } + ] +} ++``` ++> [!TIP] +> See the MedTech service article [Troubleshoot MedTech service errors](troubleshoot-errors.md) for assistance fixing MedTech service errors. ++## Next steps ++In this article, you learned how to use IotJsonPathContent mappings with the MedTech service device mapping. ++To learn how to configure the MedTech service FHIR destination mapping, see ++> [!div class="nextstepaction"] +> [How to configure FHIR destination mappings](how-to-configure-fhir-mappings.md) ++FHIR® is a registered trademark of Health Level Seven International, registered in the U.S. Trademark Office and is used with their permission. |
healthcare-apis | How To Use Mapping Debugger | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/iot/how-to-use-mapping-debugger.md | -> To learn about how the MedTech service transforms and persists device message data into the FHIR service see, [Overview of the MedTech service device data processing stages](overview-of-device-message-processing-stages.md). +> To learn about how the MedTech service transforms and persists device message data into the FHIR service, see [Overview of the MedTech service device data processing stages](overview-of-device-data-processing-stages.md). The following video presents an overview of the Mapping debugger: > |
healthcare-apis | How To Use Monitoring And Health Checks Tabs | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/iot/how-to-use-monitoring-and-health-checks-tabs.md | Metric category|Metric name|Metric description| |--|--|--| |Availability|IotConnector Health Status|The overall health of the MedTech service.| |Errors|**Total Error Count**|The total number of errors.|-|Latency|**Average Group Stage Latency**|The average latency of the group stage. The [group stage](overview-of-device-message-processing-stages.md#groupoptional) performs buffering, aggregating, and grouping on normalized messages.| -|Latency|**Average Normalize Stage Latency**|The average latency of the normalized stage. The [normalized stage](overview-of-device-message-processing-stages.md#normalize) performs normalization on raw incoming messages.| -|Traffic|Number of Fhir resources saved|The total number of FHIR resources [updated or persisted](overview-of-device-message-processing-stages.md#persist) by the MedTech service.| -|Traffic|**Number of Incoming Messages**|The number of received raw [incoming messages](overview-of-device-message-processing-stages.md#ingest) (for example, the device events) from the configured source event hub.| -|Traffic|**Number of Measurements**|The number of normalized value readings received by the FHIR [transformation stage](overview-of-device-message-processing-stages.md#transform) of the MedTech service.| +|Latency|**Average Group Stage Latency**|The average latency of the group stage. The [group stage](overview-of-device-data-processing-stages.md#groupoptional) performs buffering, aggregating, and grouping on normalized messages.| +|Latency|**Average Normalize Stage Latency**|The average latency of the normalized stage. The [normalized stage](overview-of-device-data-processing-stages.md#normalize) performs normalization on raw incoming messages.| +|Traffic|Number of Fhir resources saved|The total number of FHIR resources [updated or persisted](overview-of-device-data-processing-stages.md#persist) by the MedTech service.| +|Traffic|**Number of Incoming Messages**|The number of received raw [incoming messages](overview-of-device-data-processing-stages.md#ingest) (for example, the device events) from the configured source event hub.| +|Traffic|**Number of Measurements**|The number of normalized value readings received by the FHIR [transformation stage](overview-of-device-data-processing-stages.md#transform) of the MedTech service.| |Traffic|**Number of Message Groups**|The number of groups that have messages aggregated in the designated time window.| |Traffic|**Number of Normalized Messages**|The number of normalized messages.| |
healthcare-apis | Overview Of Device Data Processing Stages | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/iot/overview-of-device-data-processing-stages.md | + + Title: Overview of the MedTech service device data processing stages - Azure Health Data Services +description: This article provides an overview of the MedTech service device data processing stages. The MedTech service ingests, normalizes, groups, transforms, and persists device data in the FHIR service. +++++ Last updated : 04/14/2023++++# Overview of the MedTech service device data processing stages ++> [!NOTE] +> [Fast Healthcare Interoperability Resources (FHIR®)](https://www.hl7.org/fhir/) is an open healthcare specification. ++This article provides an overview of the device data processing stages within the [MedTech service](overview.md). The MedTech service transforms device data into [FHIR Observations](https://www.hl7.org/fhir/observation.html) for persistence in the [FHIR service](../fhir/overview.md). ++The MedTech service device data processing follows these stages, in this order: ++* Ingest +* Normalize - Device mapping applied. +* Group - (Optional) +* Transform - FHIR destination mapping applied. +* Persist +++## Ingest +Ingest is the first stage where device messages are received from an [Azure Event Hubs](../../event-hubs/index.yml) event hub and immediately pulled into the MedTech service. The Event Hubs service supports high scale and throughput with the ability to receive and process millions of device messages per second. It also enables the MedTech service to consume device messages asynchronously, removing the need for devices to wait while device messages are processed. The MedTech service's [system-assigned managed identity](../../active-directory/managed-identities-azure-resources/overview.md#managed-identity-types) and [Azure role-based access control (Azure RBAC)](../../role-based-access-control/overview.md) are used for secure access to the event hub. ++> [!NOTE] +> JSON is the only supported format at this time for device message data. ++> [!IMPORTANT] +> If you're going to allow access from multiple services to the event hub, each service must have its own event hub consumer group. +> +> Consumer groups enable multiple consuming applications to have a separate view of the event stream, and to read the stream independently at their own pace and with their own offsets. For more information, see [Consumer groups](../../event-hubs/event-hubs-features.md#consumer-groups). +> +> Examples: +> +> - Two MedTech services accessing the same event hub. +> +> - A MedTech service and a storage writer application accessing the same event hub. ++## Normalize +Normalize is the next stage where device data is processed using the user-selected/user-created conforming and valid [device mapping](overview-of-device-mapping.md). This mapping process transforms device data into a normalized schema. The normalization process not only simplifies device data processing at later stages, but also provides the capability to project one device message into multiple normalized messages. For instance, a device could send multiple vital signs for body temperature, pulse rate, blood pressure, and respiration rate in a single device message. This device message would create four separate FHIR Observations. Each FHIR Observation would represent a different vital sign, with the device message projected into four different normalized messages. 
++## Group - (Optional) +Group is the next *optional* stage where the normalized messages available from the MedTech service normalization stage are grouped using three different parameters: ++* Device identity +* Measurement type +* Time period ++Device identity and measurement type grouping are optional and enabled by the use of the [SampledData](https://www.hl7.org/fhir/datatypes.html#SampledData) measurement type. The SampledData measurement type provides a concise way to represent a time-based series of measurements from a device message into FHIR Observations. When you use the SampledData measurement type, measurements can be grouped into a single FHIR Observation that represents a 1-hour period or a 24-hour period. ++## Transform +Transform is the next stage where normalized messages are processed using the user-selected/user-created conforming and valid [FHIR destination mapping](how-to-configure-fhir-mappings.md). Normalized messages get transformed into FHIR Observations if a matching FHIR destination mapping has been authored. At this point, the [Device](https://www.hl7.org/fhir/device.html) resource, along with its associated [Patient](https://www.hl7.org/fhir/patient.html) resource, is also retrieved from the FHIR service using the device identifier present in the device message. These resources are added as a reference to the FHIR Observation being created. ++> [!NOTE] +> All identity lookups are cached once resolved to decrease load on the FHIR service. If you plan on reusing devices with multiple patients, it's advised that you create a virtual device resource that is specific to the patient and send the virtual device identifier in the device message payload. The virtual device can be linked to the actual device resource as a parent. ++If no Device resource for a given device identifier exists in the FHIR service, the outcome depends upon the value of [**Resolution type**](deploy-new-config.md#configure-the-destination-tab) set at the time of the MedTech service deployment. When set to **Lookup**, the specific message is ignored, and the pipeline continues to process other incoming device messages. If set to **Create**, the MedTech service creates minimal Device and Patient resources in the FHIR service. ++> [!NOTE] +> The **Resolution type** can also be adjusted post deployment of the MedTech service if a different **Resolution type** is later required. ++The MedTech service provides near real-time processing and also attempts to reduce the number of requests made to the FHIR service by grouping requests into batches of 300 [normalized messages](#normalize). If there's a low volume of data, and 300 normalized messages haven't been added to the group, then the corresponding FHIR Observations in that group are persisted to the FHIR service after approximately five minutes. When there are fewer than 300 normalized messages to be processed, there may be a delay of approximately five minutes before FHIR Observations are created or updated in the FHIR service. ++> [!NOTE] +> When multiple device messages contain data for the same FHIR Observation, have the same timestamp, and are sent within the same device message batch (for example, within the five-minute window or in groups of 300 normalized messages), only the data corresponding to the latest device message for that FHIR Observation is persisted. 
+> +> For example: +> +> Device message 1: +> ```json +> { +> "patientid": "testpatient1", +> "deviceid": "testdevice1", +> "systolic": "129", +> "diastolic": "65", +> "measurementdatetime": "2022-02-15T04:00:00.000Z" +> } +> ``` +> +> Device message 2: +> ```json +> { +> "patientid": "testpatient1", +> "deviceid": "testdevice1", +> "systolic": "113", +> "diastolic": "58", +> "measurementdatetime": "2022-02-15T04:00:00.000Z" +> } +> ``` +> +> Assuming these device messages were ingested within the same five-minute window or in the same group of 300 normalized messages, and since the `measurementdatetime` is the same for both device messages (indicating these contain data for the same FHIR Observation), only device message 2 is persisted to represent the latest/most recent data. ++## Persist +Persist is the final stage where the FHIR Observations from the transform stage are persisted in the [FHIR service](../fhir/overview.md). If the FHIR Observation is new, it's created in the FHIR service. If the FHIR Observation already existed, it gets updated in the FHIR service. The MedTech service's [system-assigned managed identity](../../active-directory/managed-identities-azure-resources/overview.md#managed-identity-types) and [Azure role-based access control (Azure RBAC)](../../role-based-access-control/overview.md) are used for secure access to the FHIR service. ++## Next steps ++In this article, you learned about the MedTech service device data processing and persistence in the FHIR service. ++To get an overview of the MedTech service device and FHIR destination mappings, see ++> [!div class="nextstepaction"] +> [Overview of the MedTech service device mapping](overview-of-device-mapping.md) ++> [!div class="nextstepaction"] +> [Overview of the MedTech service FHIR destination mapping](how-to-configure-fhir-mappings.md) ++FHIR® is a registered trademark of Health Level Seven International, registered in the U.S. Trademark Office and is used with their permission. |
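To make the normalize stage's one-to-many projection described above concrete, here's a hedged sketch. A single hypothetical device message carries two vital signs (the field names are illustrative assumptions):

```json
{
  "deviceid": "device01",
  "pulse": "78",
  "respiration": "14",
  "measurementdatetime": "2022-02-15T04:00:00.000Z"
}
```

With one device mapping template matching each vital sign, the normalize stage would project this single message into two normalized messages, each later transformed into its own FHIR Observation:

```json
[
  {
    "type": "pulse",
    "occurrenceTimeUtc": "2022-02-15T04:00:00.000Z",
    "deviceId": "device01",
    "properties": [ { "name": "pulse", "value": "78" } ]
  },
  {
    "type": "respiration",
    "occurrenceTimeUtc": "2022-02-15T04:00:00.000Z",
    "deviceId": "device01",
    "properties": [ { "name": "respiration", "value": "14" } ]
  }
]
```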
healthcare-apis | Overview Of Device Mapping | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/iot/overview-of-device-mapping.md | -The MedTech service requires two types of [JSON](https://www.json.org/) mappings that are added to your MedTech service through the Azure portal or Azure Resource Manager API. The device mapping is the first type and controls mapping values in the device message data sent to the MedTech service to an internal, normalized data object. The device mapping contains expressions that the MedTech service uses to extract types, device identifiers, measurement date time, and measurement value(s). The [FHIR destination mapping](how-to-configure-fhir-mappings.md) is the second type and controls the mapping for [FHIR Observations](https://www.hl7.org/fhir/observation.html). +The MedTech service requires two types of [JSON](https://www.json.org/) mappings that are added to your MedTech service through the Azure portal or Azure Resource Manager API. The device mapping is the first type and controls how values in the device data sent to the MedTech service are mapped to an internal, normalized data object. The device mapping contains expressions that the MedTech service uses to extract types, device identifiers, measurement date time, and measurement value(s). The [FHIR destination mapping](how-to-configure-fhir-mappings.md) is the second type and controls the mapping for [FHIR Observations](https://www.hl7.org/fhir/observation.html). > [!NOTE]-> The device and FHIR destination mappings are re-evaluated each time a message is processed. Any updates to either mapping will take effect immediately. +> The device and FHIR destination mappings are re-evaluated each time a device message is processed. Any updates to either mapping will take effect immediately. ## Device mapping basics The device mapping contains collections of expression templates used to extract device message data into an internal, normalized format for further evaluation. Each device message received is evaluated against **all** expression templates in the collection. This evaluation means that a single device message can be separated into multiple outbound messages that can be mapped to multiple FHIR Observations in the FHIR service. > [!TIP]-> For more information about how the MedTech service processes device message data into FHIR Observations for persistence on the FHIR service, see [Overview of the MedTech service device message processing stages](overview-of-device-message-processing-stages.md). +> For more information about how the MedTech service processes device message data into FHIR Observations for persistence on the FHIR service, see [Overview of the MedTech service device data processing stages](overview-of-device-data-processing-stages.md). This diagram provides an illustration of what happens during the normalization stage within the MedTech service. You can use these template types within CollectionContent depending on your use and/or -- [IotJsonPathContent](how-to-use-iotjsonpathcontenttemplate-mappings.md) for device messages being routed through [Azure IoT Hub](/azure/iot-hub/iot-concepts-and-iot-hub) to your MedTech service event hub. IotJsonPathContent supports [JSONPath](https://goessner.net/articles/JsonPath/). +- [IotJsonPathContent](how-to-use-iotjsonpathcontent-mappings.md) for device messages being routed through [Azure IoT Hub](/azure/iot-hub/iot-concepts-and-iot-hub) to your MedTech service event hub. 
IotJsonPathContent supports [JSONPath](https://goessner.net/articles/JsonPath/). :::image type="content" source="media/overview-of-device-mapping/device-mapping-templates-diagram.png" alt-text="Diagram showing MedTech service device mapping templates architecture." lightbox="media/overview-of-device-mapping/device-mapping-templates-diagram.png"::: The resulting normalized message will look like this after the normalization sta When the MedTech service is processing the device message, the templates in the CollectionContent are used to evaluate the message. The `typeMatchExpression` is used to determine whether or not the template should be used to create a normalized message from the device message. If the `typeMatchExpression` evaluates to true, then the `deviceIdExpression`, `timestampExpression`, and `valueExpression` values are used to locate and extract the JSON values from the device message and create a normalized message. In this example, all expressions are written in JSONPath; however, it would be valid to write all the expressions in JMESPath. It's up to the template author to determine which expression language is most appropriate. > [!TIP]-> See [Troubleshoot MedTech service deployment errors](troubleshoot-errors-deployment.md) for assistance fixing common MedTech service deployment errors. +> For assistance fixing common MedTech service deployment errors, see [Troubleshoot MedTech service deployment errors](troubleshoot-errors-deployment.md). >-> See [Troubleshoot errors using the MedTech service logs](troubleshoot-errors-logs.md) for assistance fixing MedTech service errors. +> For assistance fixing MedTech service errors, see [Troubleshoot errors using the MedTech service logs](troubleshoot-errors-logs.md). ## Next steps To learn how to use CalculatedContent with the MedTech service device mapping, s To learn how to use IotJsonPathContent with the MedTech service device mapping, see > [!div class="nextstepaction"] -> [How to use IotJsonPathContent with the MedTech service device mapping](how-to-use-iotjsonpathcontenttemplate-mappings.md) +> [How to use IotJsonPathContent with the MedTech service device mapping](how-to-use-iotjsonpathcontent-mappings.md) To learn how to use custom functions with the MedTech service device mapping, see |
healthcare-apis | Overview Of Fhir Destination Mapping | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/iot/overview-of-fhir-destination-mapping.md | + + Title: Overview of the MedTech service FHIR destination mapping - Azure Health Data Services +description: This article provides an overview of the MedTech service FHIR destination mapping. ++++ Last updated : 04/14/2023++++# Overview of the MedTech service FHIR destination mapping ++> [!NOTE] +> [Fast Healthcare Interoperability Resources (FHIR®)](https://www.hl7.org/fhir/) is an open healthcare specification. ++This article provides an overview of the MedTech service FHIR destination mapping. ++The MedTech service requires two types of [JSON](https://www.json.org/) mappings that are added to your MedTech service through the Azure portal or Azure Resource Manager API. The [device mapping](overview-of-device-mapping.md) is the first type and controls how values in the device data sent to the MedTech service are mapped to an internal, normalized data object. The device mapping contains expressions that the MedTech service uses to extract types, device identifiers, measurement date time, and measurement value(s). The FHIR destination mapping is the second type and controls how the normalized data is mapped to [FHIR Observations](https://www.hl7.org/fhir/observation.html). ++> [!NOTE] +> The device and FHIR destination mappings are re-evaluated each time a device message is processed. Any updates to either mapping will take effect immediately. ++## FHIR destination mapping basics ++The FHIR destination mapping controls how the data extracted from a device message is mapped into a FHIR observation. ++- Should an observation be created for a point in time or over a period of an hour? +- What codes should be added to the observation? +- Should the value be represented as [SampledData](https://www.hl7.org/fhir/datatypes.html#SampledData) or a [Quantity](https://www.hl7.org/fhir/datatypes.html#Quantity)? ++These data types are all options the FHIR destination mapping configuration controls. ++Once a device message is transformed into a normalized data model, the data is collected for transformation to a [FHIR Observation](https://www.hl7.org/fhir/observation.html). If the Observation type is [SampledData](https://www.hl7.org/fhir/datatypes.html#SampledData), the data is grouped according to device identifier, measurement type, and time period (time period can be either 1 hour or 24 hours). The output of this grouping is sent for conversion into a single [FHIR Observation](https://www.hl7.org/fhir/observation.html) that represents the time period for that data type. For other Observation types ([Quantity](https://www.hl7.org/fhir/datatypes.html#Quantity), [CodeableConcept](https://www.hl7.org/fhir/datatypes.html#CodeableConcept) and [string](https://www.hl7.org/fhir/datatypes.html#string)), data isn't grouped; instead, each measurement is transformed into a single Observation representing a point in time. ++> [!TIP] +> For more information about how the MedTech service processes device message data into FHIR Observations for persistence on the FHIR service, see [Overview of the MedTech service device data processing stages](overview-of-device-data-processing-stages.md). ++This diagram provides an illustration of what happens during the transformation stage within the MedTech service. +++> [!NOTE] +> The FHIR Observation in this diagram is not the complete resource. 
See [Example](#example) in this overview for the entire FHIR Observation. ++## FHIR destination mapping validations ++The validation process validates the FHIR destination mapping before allowing it to be saved for use. These elements are required in the FHIR destination mapping. ++**FHIR destination mapping** ++|Element|Required| +|:|:-| +|typeName|True| ++> [!NOTE] +> The 'typeName' element is used to link a FHIR destination mapping template to one or more device mapping templates. Device mapping templates with the same 'typeName' element generate normalized data that will be evaluated with a FHIR destination mapping template that has the same 'typeName'. ++## CollectionFhir ++CollectionFhir is the root template type used by the MedTech service FHIR destination mapping. CollectionFhir is a list of all templates that are used during the transformation stage. You can define one or more templates within CollectionFhir, with each normalized message evaluated against all templates. ++### CodeValueFhir ++CodeValueFhir is currently the only template supported in the FHIR destination mapping. It allows you to define codes, the effective period, and the value of the observation. Multiple value types are supported: [SampledData](https://www.hl7.org/fhir/datatypes.html#SampledData), [CodeableConcept](https://www.hl7.org/fhir/datatypes.html#CodeableConcept), [Quantity](https://www.hl7.org/fhir/datatypes.html#Quantity), and [String](https://www.hl7.org/fhir/datatypes.html#primitive). Along with these configurable values, the identifier for the Observation resource and linking to the proper Device and Patient resources are handled automatically. ++|Property|Description| +|:-|--| +|**typeName**| The type of measurement this template should bind to. There should be at least one Device mapping template that outputs this type. +|**periodInterval**|The period of time the observation created should represent. Supported values are 0 (an instance), 60 (an hour), 1440 (a day). Note: `periodInterval` is required when the Observation type is "SampledData" and is ignored for any other Observation types. +|**category**|Any number of [CodeableConcepts](http://hl7.org/fhir/datatypes-definitions.html#codeableconcept) to classify the type of observation created. +|**codes**|One or more [Codings](http://hl7.org/fhir/datatypes-definitions.html#coding) to apply to the observation created. +|**codes[].code**|The code for the [Coding](http://hl7.org/fhir/datatypes-definitions.html#coding). +|**codes[].system**|The system for the [Coding](http://hl7.org/fhir/datatypes-definitions.html#coding). +|**codes[].display**|The display for the [Coding](http://hl7.org/fhir/datatypes-definitions.html#coding). +|**value**|The value to extract and represent in the observation. For more information, see [Value type codes](#value-type-codes). +|**components**|*Optional:* One or more components to create on the observation. +|**components[].codes**|One or more [Codings](http://hl7.org/fhir/datatypes-definitions.html#coding) to apply to the component. +|**components[].value**|The value to extract and represent in the component. For more information, see [Value type codes](#value-type-codes). +++### Value type codes ++The supported value type codes for the MedTech service FHIR destination mapping: ++### SampledData ++Represents the [SampledData](http://hl7.org/fhir/datatypes.html#SampledData) FHIR data type. 
Observation measurements are written to a value stream starting at a point in time and incrementing forward using the period defined. If no value is present, an `E` is written into the data stream. If the period is such that two or more values occupy the same position in the data stream, the latest value is used. The same logic is applied when an observation using the SampledData is updated. ++| Property | Description +| | +|**DefaultPeriod**|The default period in milliseconds to use. +|**Unit**|The unit to set on the origin of the SampledData. ++### Quantity ++Represents the [Quantity](http://hl7.org/fhir/datatypes.html#Quantity) FHIR data type. This type creates a single point-in-time Observation. If a new value arrives that contains the same device identifier, measurement type, and timestamp, the previous Observation is updated to the new value. ++| Property | Description +| | +|**Unit**| Unit representation. +|**Code**| Coded form of the unit. +|**System**| System that defines the coded unit form. ++### CodeableConcept ++Represents the [CodeableConcept](http://hl7.org/fhir/datatypes.html#CodeableConcept) FHIR data type. The value in the normalized data model isn't used. Instead, when this type of data is received, an Observation is created with a specific code representing that an observation was recorded at a specific point in time. ++| Property | Description +| | +|**Text**|Plain text representation. +|**Codes**|One or more [Codings](http://hl7.org/fhir/datatypes-definitions.html#coding) to apply to the observation created. +|**Codes[].Code**|The code for the [Coding](http://hl7.org/fhir/datatypes-definitions.html#coding). +|**Codes[].System**|The system for the [Coding](http://hl7.org/fhir/datatypes-definitions.html#coding). +|**Codes[].Display**|The display for the [Coding](http://hl7.org/fhir/datatypes-definitions.html#coding). ++### String ++Represents the [string](https://www.hl7.org/fhir/datatypes.html#string) FHIR data type. This type creates a single point-in-time Observation. If a new value arrives that contains the same device identifier, measurement type, and timestamp, the previous Observation is updated to the new value. ++### Example ++> [!TIP] +> You can use the MedTech service [Mapping debugger](how-to-use-mapping-debugger.md) for assistance creating, updating, and troubleshooting the MedTech service device and FHIR destination mappings. The Mapping debugger enables you to easily view and make inline adjustments in real-time, without ever having to leave the Azure portal. The Mapping debugger can also be used for uploading test device messages to see how they'll look after being processed into normalized messages and transformed into FHIR Observations. ++> [!NOTE] +> This example and normalized message are a continuation from [Overview of the MedTech service device mapping](overview-of-device-mapping.md#example). 
++In this example, we're using a normalized message capturing `heartRate` data: ++```json +[ + { + "type": "heartrate", + "occurrenceTimeUtc": "2023-03-13T22:46:01.875Z", + "deviceId": "device01", + "properties": [ + { + "name": "hr", + "value": "78" + } + ] + } +] +``` ++We're using this FHIR destination mapping for the transformation stage: ++```json +{ + "templateType": "CollectionFhir", + "template": [ + { + "templateType": "CodeValueFhir", + "template": { + "codes": [ + { + "code": "8867-4", + "system": "http://loinc.org", + "display": "Heart rate" + } + ], + "typeName": "heartrate", + "value": { + "system": "http://unitsofmeasure.org", + "code": "count/min", + "unit": "count/min", + "valueName": "hr", + "valueType": "Quantity" + } + } + } + ] +} ++``` ++The resulting FHIR Observation will look like this after the transformation stage: ++```json +[ + { + "code": { + "coding": [ + { + "system": { + "value": "http://loinc.org" + }, + "code": { + "value": "8867-4" + }, + "display": { + "value": "Heart rate" + } + } + ], + "text": { + "value": "heartrate" + } + }, + "effective": { + "start": { + "value": "2023-03-13T22:46:01.8750000Z" + }, + "end": { + "value": "2023-03-13T22:46:01.8750000Z" + } + }, + "issued": { + "value": "2023-04-05T21:02:59.1650841+00:00" + }, + "value": { + "value": { + "value": 78 + }, + "unit": { + "value": "count/min" + }, + "system": { + "value": "http://unitsofmeasure.org" + }, + "code": { + "value": "count/min" + } + } + } +] +``` ++> [!TIP] +> For assistance fixing common MedTech service deployment errors, see [Troubleshoot MedTech service deployment errors](troubleshoot-errors-deployment.md). +> +> For assistance fixing MedTech service errors, see [Troubleshoot errors using the MedTech service logs](troubleshoot-errors-logs.md). ++## Next steps ++In this article, you've been provided with an overview of the MedTech service FHIR destination mapping. ++To get an overview of the MedTech service device mapping, see ++> [!div class="nextstepaction"] +> [Overview of the MedTech service device mapping](overview-of-device-mapping.md) ++To learn how to use CalculatedContent with the MedTech service device mapping, see ++> [!div class="nextstepaction"] +> [How to use CalculatedContent with the MedTech service device mapping](how-to-use-calculatedcontent-mappings.md) ++To learn how to use IotJsonPathContent with the MedTech service device mapping, see ++> [!div class="nextstepaction"] +> [How to use IotJsonPathContent with the MedTech service device mapping](how-to-use-iotjsonpathcontent-mappings.md) ++To learn how to use custom functions with the MedTech service device mapping, see ++> [!div class="nextstepaction"] +> [How to use custom functions with the MedTech service device mapping](how-to-use-custom-functions.md) ++FHIR® is a registered trademark of Health Level Seven International, registered in the U.S. Trademark Office and is used with their permission. |
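The example above uses the Quantity value type. For comparison, here's a hedged sketch of a CodeValueFhir template using the SampledData value type, which, per the grouping behavior described above, would aggregate an hour of heart rate readings (`periodInterval` of 60) into a single Observation. The period and unit values are illustrative:

```json
{
  "templateType": "CollectionFhir",
  "template": [
    {
      "templateType": "CodeValueFhir",
      "template": {
        "codes": [
          {
            "code": "8867-4",
            "system": "http://loinc.org",
            "display": "Heart rate"
          }
        ],
        "periodInterval": 60,
        "typeName": "heartrate",
        "value": {
          "defaultPeriod": 5000,
          "unit": "count/min",
          "valueName": "hr",
          "valueType": "SampledData"
        }
      }
    }
  ]
}
```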
healthcare-apis | Overview | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/iot/overview.md | The MedTech service processes device data in five stages: 4. **Transform** - When the normalized data is grouped, it's transformed through the FHIR destination mapping and is ready to become FHIR Observations. -5. **Persist** - After the transformation is done, the new data is sent to FHIR service and persisted as FHIR Observations. +5. **Persist** - After the transformation is done, the new data is sent to the FHIR service and persisted as FHIR Observations. ## Key features of the MedTech service The MedTech service delivers your device data into FHIR service, ensuring that y ### Configurable -The MedTech service can be customized and configured by using [device](how-to-configure-device-mappings.md) and [FHIR destination](how-to-configure-fhir-mappings.md) mappings to define the filtering and transformation of your data into FHIR Observations. +The MedTech service can be customized and configured by using [device](overview-of-device-mapping.md) and [FHIR destination](how-to-configure-fhir-mappings.md) mappings to define the filtering and transformation of your data into FHIR Observations. Useful options could include: In this article, you learned about the MedTech service and its capabilities. To learn about how the MedTech service processes device data, see > [!div class="nextstepaction"]-> [Overview of the MedTech service device data processing stages](overview-of-device-message-processing-stages.md) +> [Overview of the MedTech service device data processing stages](overview-of-device-data-processing-stages.md) To learn about the different deployment methods for the MedTech service, see |
healthcare-apis | Troubleshoot Errors Deployment | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/iot/troubleshoot-errors-deployment.md | Here's a list of errors that can be found in the Azure Resource Manager (ARM) AP **Fix**: Set the `location` property of the FHIR destination in your ARM template to the same value as the parent MedTech service's `location` property. > [!NOTE]-> If you're not able to fix your MedTech service issue using this troubleshooting guide, you can open an [Azure Technical Support](https://azure.microsoft.com/support/create-ticket/) ticket attaching copies of your device message, [device mapping, and FHIR destination mapping](how-to-create-mappings-copies.md) to your request to better help with issue determination. +> If you're not able to fix your MedTech service issue using this troubleshooting guide, you can open an [Azure Technical Support](https://azure.microsoft.com/support/create-ticket/) ticket, attaching copies of your device message and your [device and FHIR destination mappings](how-to-use-mapping-debugger.md#overview-of-the-mapping-debugger) to the request to help with issue determination. ## Next steps |
healthcare-apis | Troubleshoot Errors Logs | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/iot/troubleshoot-errors-logs.md | This property represents the operation being performed by the MedTech service wh |FHIRConversion|The data flow stage where the grouped-normalized data is transformed into an Observation resource.| > [!NOTE]-> To learn about the MedTech service device message data transformation, see [Overview of the MedTech service device message processing stages](overview-of-device-message-processing-stages.md). +> To learn about the MedTech service device message data transformation, see [Overview of the MedTech service device data processing stages](overview-of-device-data-processing-stages.md). ## MedTech service health check exceptions and fixes The expression and line with the error are specified in the error message. **Fix**: On the Azure portal, go to your FHIR service, and assign the **FHIR Data Writer** role to your MedTech service (see [step-by-step instructions](deploy-new-deploy.md#grant-access-to-the-fhir-service)). > [!NOTE]-> If you're not able to fix your MedTech service issue using this troubleshooting guide, you can open an [Azure Technical Support](https://azure.microsoft.com/support/create-ticket/) ticket and attach copies of your device message, [device mapping, and FHIR destination mapping](how-to-create-mappings-copies.md) to your request to better help with issue determination. +> If you're not able to fix your MedTech service issue using this troubleshooting guide, you can open an [Azure Technical Support](https://azure.microsoft.com/support/create-ticket/) ticket attaching copies of your device message, [device and FHIR destination mappings](how-to-use-mapping-debugger.md#overview-of-the-mapping-debugger) to your request to better help with issue determination. ## Next steps |
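The article walks through the **FHIR Data Writer** assignment in the Azure portal; as a command-line sketch, the equivalent Azure CLI call might look like the following, where the principal ID and scope are placeholders you'd read from your own MedTech service identity and FHIR service resource:

```bash
# Grant the MedTech service's system-assigned managed identity the
# FHIR Data Writer role on the FHIR service (IDs below are placeholders).
az role assignment create \
  --assignee "<medtech-service-managed-identity-principal-id>" \
  --role "FHIR Data Writer" \
  --scope "/subscriptions/<subscription-id>/resourceGroups/<resource-group>/providers/Microsoft.HealthcareApis/workspaces/<workspace>/fhirservices/<fhir-service>"
```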
iot-develop | Concepts Using C Sdk And Embedded C Sdk | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-develop/concepts-using-c-sdk-and-embedded-c-sdk.md | description: Helps developers decide which C-based Azure IoT device SDK to use f -+ Last updated 09/16/2022-+ #Customer intent: As a device developer, I want to understand when to use the Azure IoT C SDK or the Embedded C SDK to optimize device and application performance. |
iot-hub-device-update | Connected Cache Configure | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-hub-device-update/connected-cache-configure.md | - Title: Configure Microsoft Connected Cache for Device Update for Azure IoT Hub- -description: Overview of Microsoft Connected Cache for Device Update for Azure IoT Hub -- Previously updated : 08/19/2022-----# Configure Microsoft Connected Cache for Device Update for IoT Hub --> [!NOTE] -> This information relates to a preview feature that's available for early testing and use in a production environment. This feature is fully supported but it's still in active development and may receive substantial changes until it becomes generally available. --Microsoft Connected Cache (MCC) is deployed to Azure IoT Edge gateways as an IoT Edge module. Like other IoT Edge modules, environment variables and container create options are used to configure MCC modules. This article defines the environment variables and container create options that are required for a customer to successfully deploy the Microsoft Connected Cache module for use by Device Update for IoT Hub. --## Module deployment details --There's no naming requirement for the Microsoft Connected Cache module since no other module or service interactions rely on the name of the MCC module for communication. Additionally, the parent-child relationship of the Microsoft Connected Cache servers isn't dependent on this module name, but rather the FQDN or IP address of the IoT Edge gateway. --Microsoft Connected Cache module environment variables are used to pass basic module identity information and functional module settings to the container. --| Variable name | Value format | Description | -|--|--|--|--| -| CUSTOMER_ID | Azure subscription ID GUID | Required <br><br> This is the customer's key, which provides secure authentication of the cache node to Delivery Optimization services. | -| CACHE_NODE_ID | Cache node ID GUID | Required <br><br> Uniquely identifies the MCC node to Delivery Optimization services. | -| CUSTOMER_KEY | Customer Key GUID | Required <br><br> This is the customer's key, which provides secure authentication of the cache node to Delivery Optimization services. | -| STORAGE_*N*_SIZE_GB (Where *N* is the cache drive) | Integer | Required <br><br> Specify up to nine drives to cache content and specify the maximum space in gigabytes to allocate for content on each cache drive. The number of the drive must match the cache drive binding values specified in the container create option MicrosoftConnectedCache*N* value.<br><br>Examples:<br>STORAGE_1_SIZE_GB = 150<br>STORAGE_2_SIZE_GB = 50<br><br>Minimum size of the cache is 10 GB. | -| UPSTREAM_HOST | FQDN/IP | Optional <br><br> This value can specify an upstream MCC node that acts as a proxy if the Connected Cache node is disconnected from the internet. This setting is used to support the nested IoT scenario.<br><br>**Note:** MCC listens on http default port 80. | -| UPSTREAM_PROXY | FQDN/IP:PORT | Optional <br><br> The outbound internet proxy. This could also be the OT DMZ proxy of an ISA 95 network. | -| CACHEABLE_CUSTOM_*N*_HOST | HOST/IP<br>FQDN | Optional <br><br> Required to support custom package repositories. Repositories could be hosted locally or on the internet. 
There's no limit to the number of custom hosts that can be configured.<br><br>Examples:<br>Name = CACHEABLE_CUSTOM_1_HOST Value = packages.foo.com<br> Name = CACHEABLE_CUSTOM_2_HOST Value = packages.bar.com | -| CACHEABLE_CUSTOM_*N*_CANONICAL | Alias | Optional <br><br> Required to support custom package repositories. This value can be used as an alias and will be used by the cache server to reference different DNS names. For example, repository content hostname may be packages.foo.com, but for different regions there could be an extra prefix that is added to the hostname like westuscdn.packages.foo.com and eastuscdn.packages.foo.com. By setting the canonical alias, you ensure that content isn't duplicated for content coming from the same host, but different CDN sources. The format of the canonical value isn't important, but it must be unique to the host. It may be easiest to set the value to match the host value.<br><br>Examples based on Custom Host examples above:<br>Name = CACHEABLE_CUSTOM_1_CANONICAL Value = foopackages<br> Name = CACHEABLE_CUSTOM_2_CANONICAL Value = packages.bar.com | -| IS_SUMMARY_PUBLIC | True or False | Optional <br><br> Enables viewing of the summary report on the local network or internet. Use of an API key (discussed later) is required to view the summary report if set to true. | -| IS_SUMMARY_ACCESS_UNRESTRICTED | True or False | Optional <br><br> Enables viewing of summary report on the local network or internet without use of API key from any device in the network. Use if you don't want to lock down access to viewing cache server summary data via the browser. | --## Module container create options --Container create options provide control of the settings related to storage and ports used by the Microsoft Connected Cache module. --Sample container create options: --```json -{ - "HostConfig": { - "Binds": [ - "/microsoftConnectedCache1/:/nginx/cache1/" - ], - "PortBindings": { - "8081/tcp": [ - { - "HostPort": "80" - } - ], - "5000/tcp": [ - { - "HostPort": "5100" - } - ] - } - } -} -``` --The following sections list the required container create variables used to deploy the MCC module. --### HostConfig --The `HostConfig` parameters are required to map the container storage location to the storage location on the disk. Up to nine locations can be specified. -->[!Note] ->The number of the drive must match the cache drive binding values specified in the environment variable STORAGE_*N*_SIZE_GB value, `/MicrosoftConnectedCache*N*/:/nginx/cache*N*/`. --### PortBindings --The `PortBindings` parameters map container ports to ports on the host device. --The first port binding specifies the external machine HTTP port that MCC listens on for content requests. The default HostPort is port 80 and other ports aren't supported at this time as the ADU client makes requests on port 80 today. TCP port 8081 is the internal container port that the MCC listens on and can't be changed. --The second port binding ensures that the container isn't listening on host port 5000. The Microsoft Connected Cache module has a .NET Core service, which is used by the caching engine for various functions. To support nested edge, the HostPort must not be set to 5000 because the registry proxy module is already listening on host port 5000. --## Microsoft Connected Cache summary report --The summary report is currently the only way for a customer to view caching data for the Microsoft Connected Cache instances deployed to IoT Edge gateways. 
The report is generated at 15-second intervals and includes averaged stats for the period and aggregated stats for the lifetime of the module. The key stats that customers will be interested in are: --* **hitBytes** - The sum of bytes delivered that came directly from cache. -* **missBytes** - The sum of bytes delivered that Microsoft Connected Cache had to download from CDN to see the cache. -* **eggressBytes** - The sum of hitBytes and missBytes and is the total bytes delivered to clients. -* **hitRatioBytes** - The ratio of hitBytes to egressBytes. For example, if 100% of eggressBytes delivered in a period were equal to the hitBytes, this value would be 1. ---The summary report is available at `http://<IoT Edge gateway>:5001/summary` Replace \<IoT Edge Gateway\> with the IP address or hostname of the IoT Edge gateway hosting the MCC module. |
iot-hub-device-update | Connected Cache Disconnected Device Update | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-hub-device-update/connected-cache-disconnected-device-update.md | Title: Disconnected device update using Microsoft Connected Cache -description: Understand support for disconnected device update using Microsoft Connected Cache -+description: Understand how the Microsoft Connected Cache module for Azure IoT Edge enables updating disconnected devices with Device Update for Azure IoT Hub + Previously updated : 08/19/2022 Last updated : 04/14/2023 -# Understand support for disconnected device updates +# Understand support for disconnected device updates (preview) ++The Microsoft Connected Cache (MCC) module for IoT Edge devices enables Device Update capabilities on disconnected devices behind gateways. In a transparent gateway scenario, one or more devices can pass their messages through a single gateway device that maintains the connection to Azure IoT Hub. In these cases, the child devices may not have internet connectivity or may not be allowed to download content from the internet. The MCC module provides Device Update for IoT Hub customers with the capability of an intelligent in-network cache. The cache enables image-based and package-based updates of Linux OS-based devices that are behind an IoT Edge gateway (also called *downstream* IoT devices). The cache also helps reduce the bandwidth used for updates. > [!NOTE] > This information relates to a preview feature that's available for early testing and use in a production environment. This feature is fully supported but it's still in active development and may receive substantial changes until it becomes generally available. -In a transparent gateway scenario, one or more devices can pass their messages through a single gateway device that maintains the connection to Azure IoT Hub. In these cases, the child devices may not have internet connectivity or may not be allowed to download content from the internet. The Microsoft Connected Cache preview IoT Edge module provides Device Update for IoT Hub customers with the capability of an intelligent in-network cache. The cache enables image-based and package-based updates of Linux OS-based devices behind an IoT Edge gateway (also called *downstream* IoT devices), and also helps reduce the bandwidth used for updates. +If you aren't familiar with IoT Edge gateways, learn more about [How an IoT Edge device can be used as a gateway](../iot-edge/iot-edge-as-gateway.md). -## Microsoft Connected Cache preview for Device Update for IoT Hub -Microsoft Connected Cache is an intelligent, transparent cache for content published for Device Update for IoT Hub and can be customized to cache content from other sources like package repositories as well. Microsoft Connected Cache is a cold cache that is warmed by client requests for the exact file ranges requested by the Delivery Optimization client and doesn't pre-seed content. The diagram and step-by-step description below explains how Microsoft Connected Cache works within the Device Update infrastructure. +## What is Microsoft Connected Cache -Microsoft Connected Cache is an intelligent, transparent cache for content published for Device Update for IoT Hub and can be customized to cache content from other sources like package repositories as well. 
Microsoft Connected Cache is a cold cache that is warmed by client requests for the exact file ranges requested by the Delivery Optimization client and doesn't pre-seed content. The following diagram and step-by-step description explain how Microsoft Connected Cache works within the Device Update infrastructure. >[!Note] >This flow assumes that the IoT Edge gateway has internet connectivity. For the downstream IoT Edge gateway (nested edge) scenario, the content delivery network (CDN) can be considered the MCC hosted on the parent IoT Edge gateway. - :::image type="content" source="media/connected-cache-overview/disconnected-device-update.png" alt-text="Disconnected Device Update" lightbox="media/connected-cache-overview/disconnected-device-update.png"::: -1. Microsoft Connected Cache is deployed as an IoT Edge module to the on-premises server. -2. Device Update for IoT Hub clients are configured to download content from Microsoft Connected Cache by virtue of either the GatewayHostName attribute of the device connection string for IoT leaf devices **or** the parent_hostname set in the config.toml for IoT Edge child devices. -3. Device Update for IoT Hub clients receive update content download commands from the Device Update service and request update content from the Microsoft Connected Cache instead of the CDN. Microsoft Connected Cache listens on HTTP port 80 by default, and the Delivery Optimization client makes the content request on port 80 so the parent must be configured to listen on this port. Only the HTTP protocol is supported at this time. +1. Microsoft Connected Cache is deployed as an IoT Edge module to the on-premises gateway server. +2. Device Update for IoT Hub clients are configured to download content from Microsoft Connected Cache by using either the GatewayHostName attribute of the device connection string for IoT leaf devices **or** the parent_hostname set in the config.toml for IoT Edge child devices. +3. Device Update for IoT Hub clients receive download commands from the Device Update service and request update content from the Microsoft Connected Cache instead of the CDN. Microsoft Connected Cache listens on HTTP port 80 by default, and the Delivery Optimization client makes the content request on port 80 so the parent must be configured to listen on this port. Only the HTTP protocol is supported at this time. 4. The Microsoft Connected Cache server downloads content from the CDN, seeds its local cache stored on disk and delivers the content to the Device Update client. >[!Note] >When using package-based updates, the Microsoft Connected Cache server will be configured by the admin with the required package hostname. -5. Subsequent requests from other Device Update clients for the same update content will now come from cache and Microsoft Connected Cache won't make requests to the CDN for the same content. +5. Subsequent requests from other Device Update clients for the same update content now come from cache and Microsoft Connected Cache won't make requests to the CDN for the same content. ### Supporting industrial IoT (IIoT) with parent/child hosting scenarios -When a downstream or child IoT Edge gateway is hosting a Microsoft Connected Cache server, it will be configured to request update content from the parent IoT Edge gateway, also hosting a Microsoft Connected Cache server. This request is repeated for as many levels as necessary before reaching the parent IoT Edge gateway hosting a Microsoft Connected Cache server that has internet access. 
From the internet connected server, the content is requested from the CDN at which point the content is delivered back to the child IoT Edge gateway that originally requested the content. The content will be stored on disk at every level. +Industrial IoT (IIoT) scenarios often involve multiple levels of IoT Edge gateways, with only the top level having internet access. In this scenario, each gateway hosts a Microsoft Connected Cache service that is configured to request update content from its parent gateway. ++When a child (or downstream) IoT Edge gateway makes a request for update content from its parent gateway, this request is repeated for as many levels as necessary before reaching the topmost IoT Edge gateway hosting a Microsoft Connected Cache server that has internet access. From the internet connected server, the content is requested from the CDN at which point the content is delivered back to the child IoT Edge gateway that originally requested the content. The content is stored on disk at every level. ## Request access to the preview The Microsoft Connected Cache IoT Edge module is released as a preview for customers who are deploying solutions using Device Update for IoT Hub. Access to the preview is by invitation. [Request Access](https://aka.ms/MCCForDeviceUpdateForIoT) to the Microsoft Connected Cache preview for Device Update for IoT Hub and provide the information requested if you would like access to the module.++## Microsoft Connected Cache module configuration ++Microsoft Connected Cache is deployed to Azure IoT Edge gateways as an IoT Edge module. Like other IoT Edge modules, environment variables and container create options are used to configure MCC modules. This section defines the environment variables and container create options that are required to successfully deploy the MCC module for use by Device Update for IoT Hub. ++There's no naming requirement for the Microsoft Connected Cache module since no other module or service interactions rely on the name of the MCC module for communication. Additionally, the parent-child relationship of the Microsoft Connected Cache servers isn't dependent on this module name, but rather the FQDN or IP address of the IoT Edge gateway. ++### Module environment variables ++Microsoft Connected Cache module environment variables are used to pass basic module identity information and functional module settings to the container. ++| Variable name | Value format | Description | +|--|--|--| +| CUSTOMER_ID | Azure subscription ID GUID | Required <br><br> This value is the customer's ID, which provides secure authentication of the cache node to Delivery Optimization services. | +| CACHE_NODE_ID | Cache node ID GUID | Required <br><br> Uniquely identifies the MCC node to Delivery Optimization services. | +| CUSTOMER_KEY | Customer Key GUID | Required <br><br> This value is the customer's key, which provides secure authentication of the cache node to Delivery Optimization services. | +| STORAGE_*N*_SIZE_GB (Where *N* is the cache drive) | Integer | Required <br><br> Specify up to nine drives to cache content and specify the maximum space in gigabytes to allocate for content on each cache drive. The number of the drive must match the cache drive binding values specified in the container create option MicrosoftConnectedCache*N* value.<br><br>Examples:<br>STORAGE_1_SIZE_GB = 150<br>STORAGE_2_SIZE_GB = 50<br><br>Minimum size of the cache is 10 GB. 
| +| UPSTREAM_HOST | FQDN/IP | Optional <br><br> This value can specify an upstream MCC node that acts as a proxy if the Connected Cache node is disconnected from the internet. This setting is used to support the nested IoT scenario.<br><br>**Note:** MCC listens on http default port 80. | +| UPSTREAM_PROXY | FQDN/IP:PORT | Optional <br><br> The outbound internet proxy. This value could also be the OT DMZ proxy of an ISA 95 network. | +| CACHEABLE_CUSTOM_*N*_HOST | HOST/IP<br>FQDN | Optional <br><br> Required to support custom package repositories. Repositories could be hosted locally or on the internet. There's no limit to the number of custom hosts that can be configured.<br><br>Examples:<br>Name = CACHEABLE_CUSTOM_1_HOST Value = packages.foo.com<br> Name = CACHEABLE_CUSTOM_2_HOST Value = packages.bar.com | +| CACHEABLE_CUSTOM_*N*_CANONICAL | Alias | Optional <br><br> Required to support custom package repositories. This value can be used as an alias and will be used by the cache server to reference different DNS names. For example, repository content hostname may be packages.foo.com, but for different regions there could be an extra prefix that is added to the hostname like westuscdn.packages.foo.com and eastuscdn.packages.foo.com. By setting the canonical alias, you ensure that content isn't duplicated for content coming from the same host, but different CDN sources. The format of the canonical value isn't important, but it must be unique to the host. It may be easiest to set the value to match the host value.<br><br>Examples based on the previous custom host examples:<br>Name = CACHEABLE_CUSTOM_1_CANONICAL Value = foopackages<br> Name = CACHEABLE_CUSTOM_2_CANONICAL Value = packages.bar.com | +| IS_SUMMARY_PUBLIC | True or False | Optional <br><br> Enables viewing of the summary report on the local network or internet. Use of an API key (discussed later) is required to view the summary report if set to true. | +| IS_SUMMARY_ACCESS_UNRESTRICTED | True or False | Optional <br><br> Enables viewing of summary report on the local network or internet without use of API key from any device in the network. Use if you don't want to lock down access to viewing cache server summary data via the browser. | ++### Module container create options ++Container create options provide control of the settings related to storage and ports used by the Microsoft Connected Cache module. ++Sample container create options: ++```json +{ + "HostConfig": { + "Binds": [ + "/microsoftConnectedCache1/:/nginx/cache1/" + ], + "PortBindings": { + "8081/tcp": [ + { + "HostPort": "80" + } + ], + "5000/tcp": [ + { + "HostPort": "5100" + } + ] + } + } +} +``` ++The following sections list the required container create variables used to deploy the MCC module. ++#### HostConfig ++The `HostConfig` parameters are required to map the container storage location to the storage location on the disk. Up to nine locations can be specified. ++>[!Note] +>The number of the drive must match the cache drive binding values specified in the environment variable STORAGE_*N*_SIZE_GB value, `/MicrosoftConnectedCache*N*/:/nginx/cache*N*/`. ++#### PortBindings ++The `PortBindings` parameters map container ports to ports on the host device. ++The first port binding specifies the external machine HTTP port that MCC listens on for content requests. The default HostPort is port 80 and other ports aren't supported at this time as the ADU client makes requests on port 80 today. 
TCP port 8081 is the internal container port that the MCC listens on and can't be changed. ++The second port binding ensures that the container isn't listening on host port 5000. The Microsoft Connected Cache module has a .NET Core service, which is used by the caching engine for various functions. To support nested edge, the HostPort must not be set to 5000 because the registry proxy module is already listening on host port 5000. ++## Microsoft Connected Cache summary report ++The summary report is currently the only way for a customer to view caching data for the Microsoft Connected Cache instances deployed to IoT Edge gateways. The report is generated at 15-second intervals and includes averaged stats for the period and aggregated stats for the lifetime of the module. The key stats that the report provides are: ++* **hitBytes** - The sum of bytes delivered that came directly from cache. +* **missBytes** - The sum of bytes delivered that Microsoft Connected Cache had to download from CDN to seed the cache. +* **eggressBytes** - The sum of hitBytes and missBytes and is the total bytes delivered to clients. +* **hitRatioBytes** - The ratio of hitBytes to eggressBytes. For example, if 100% of eggressBytes delivered in a period were equal to the hitBytes, this value would be 1. ++The summary report is available at `http://<IoT Edge gateway>:5001/summary`. Replace \<IoT Edge Gateway\> with the IP address or hostname of the IoT Edge gateway hosting the MCC module. ++## Next steps ++Learn how to implement Microsoft Connected Cache in [single gateways](./connected-cache-single-level.md) or [nested and industrial IoT gateways](./connected-cache-nested-level.md). |
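As a quick usage sketch of the summary report described above, any machine permitted to view the report (per the IS_SUMMARY_PUBLIC and IS_SUMMARY_ACCESS_UNRESTRICTED variables) can fetch it over HTTP; the gateway address is a placeholder:

```bash
# Fetch the MCC summary report on port 5001. This works without an API key
# only when IS_SUMMARY_ACCESS_UNRESTRICTED is set to true.
curl "http://<IoT Edge gateway>:5001/summary"
```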
iot-hub-device-update | Connected Cache Industrial Iot Nested | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-hub-device-update/connected-cache-industrial-iot-nested.md | - Title: Microsoft Connected Cache within an Azure IoT Edge for Industrial IoT configuration -description: Microsoft Connected Cache within an Azure IoT Edge for Industrial IoT configuration tutorial -- Previously updated : 2/16/2021-----# Microsoft Connected Cache preview deployment scenario sample: Microsoft Connected Cache within an Azure IoT Edge for Industrial IoT configuration --> [!NOTE] -> This information relates to a preview feature that's available for early testing and use in a production environment. This feature is fully supported but it's still in active development and may receive substantial changes until it becomes generally available. --Manufacturing networks are often organized in hierarchical layers following the [Purdue network model](https://en.wikipedia.org/wiki/Purdue_Enterprise_Reference_Architecture) (included in the [ISA 95](https://en.wikipedia.org/wiki/ANSI/ISA-95) and [ISA 99](https://www.isa.org/standards-and-publications/isa-standards/isa-standards-committees/isa99) standards). In these networks, only the top layer has connectivity to the cloud and the lower layers in the hierarchy can only communicate with adjacent north and south layers. --This GitHub sample, [Azure IoT Edge for Industrial IoT](https://github.com/Azure-Samples/iot-edge-for-iiot), deploys the following: --* Simulated Purdue network in Azure -* Industrial assets -* Hierarchy of Azure IoT Edge gateways - -These components will be used to acquire industrial data and securely upload it to the cloud without compromising the security of the network. Microsoft Connected Cache can be deployed to support the download of content at all levels within the ISA 95 compliant network. --The key to configuring Microsoft Connected Cache deployments within an ISA 95 compliant network is configuring both the OT proxy *and* the upstream host at the L3 IoT Edge gateway. --1. Configure Microsoft Connected Cache deployments at the L5 and L4 levels as described in the Two-Level Nested IoT Edge gateway sample -2. The deployment at the L3 IoT Edge gateway must specify: - - * UPSTREAM_HOST - The IP/FQDN of the L4 IoT Edge gateway, which the L3 Microsoft Connected Cache will request content. - * UPSTREAM_PROXY - The IP/FQDN:PORT of the OT proxy server. --3. The OT proxy must add the L4 MCC FQDN/IP address to the allowlist. --To validate that Microsoft Connected Cache is functioning properly, execute the following command in the terminal of the IoT Edge device, hosting the module, or any device on the network. Replace \<Azure IoT Edge Gateway IP\> with the IP address or hostname of your IoT Edge gateway. (see environment variable details for information on visibility of this report). --```bash - wget http://<L3 IoT Edge Gateway IP>/mscomtest/wuidt.gif?cacheHostOrigin=au.download.windowsupdate.com -``` |
iot-hub-device-update | Connected Cache Nested Level | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-hub-device-update/connected-cache-nested-level.md | Title: Microsoft Connected Cache two level nested Azure IoT Edge Gateway with outbound unauthenticated proxy + Title: Deploy Microsoft Connected Cache on nested gateways -description: Microsoft Connected Cache two level nested Azure IoT Edge Gateway with outbound unauthenticated proxy tutorial -+description: Microsoft Connected Cache two level nested Azure IoT Edge Gateway with outbound unauthenticated proxy + Previously updated : 2/16/2021 Last updated : 04/14/2023 -# Microsoft Connected Cache preview deployment scenario sample: Two level nested Azure IoT Edge Gateway with outbound unauthenticated proxy +# Deploy the Microsoft Connected Cache module on nested gateways, including in IIoT scenarios (preview) ++The Microsoft Connected Cache module supports nested, or hierarchical gateways, in which one or more IoT Edge gateway devices are behind a single gateway that has access to the internet. This article describes a deployment scenario sample that has two nested Azure IoT Edge gateway devices (a *parent gateway* and a *child gateway*) with outbound unauthenticated proxy. > [!NOTE] > This information relates to a preview feature that's available for early testing and use in a production environment. This feature is fully supported but it's still in active development and may receive substantial changes until it becomes generally available. -The diagram below describes the scenario where one Azure IoT Edge gateway has direct access to CDN resources and is acting as the parent to another Azure IoT Edge gateway. The child IoT Edge gateway is acting as the parent to an Azure IoT leaf device such as a Raspberry Pi. Both the Azure IoT Edge child and Azure IoT device are internet isolated. The example below demonstrates the configuration for two-levels of Azure IoT Edge gateways, but there is no limit to the depth of upstream hosts that Microsoft Connected Cache will support. There is no difference in Microsoft Connected Cache container create options from the previous examples. +The following diagram describes the scenario where one Azure IoT Edge gateway has direct access to CDN resources and is acting as the parent to another Azure IoT Edge gateway. The child IoT Edge gateway is acting as the parent to an IoT leaf device such as a Raspberry Pi. Both the IoT Edge child gateway and the IoT device are internet isolated. This example demonstrates the configuration for two levels of Azure IoT Edge gateways, but there's no limit to the depth of upstream hosts that Microsoft Connected Cache will support. + -Refer to the documentation [Connect downstream IoT Edge devices - Azure IoT Edge](../iot-edge/how-to-connect-downstream-iot-edge-device.md?preserve-view=true&tabs=azure-portal&view=iotedge-2020-11) for more details on configuring layered deployments of Azure IoT Edge gateways. Additionally note that when deploying Azure IoT Edge, Microsoft Connected Cache, and custom modules, all modules must reside in the same container registry. +Refer to the documentation [Connect downstream IoT Edge devices](../iot-edge/how-to-connect-downstream-iot-edge-device.md) for more details on configuring layered deployments of Azure IoT Edge gateways. Additionally note that when deploying Azure IoT Edge, Microsoft Connected Cache, and custom modules, all modules must reside in the same container registry. 
>[!Note] >When deploying Azure IoT Edge, Microsoft Connected Cache, and custom modules, all modules must reside in the same container registry. - :::image type="content" source="media/connected-cache-overview/nested-level-proxy.png" alt-text="Microsoft Connected Cache Nested" lightbox="media/connected-cache-overview/nested-level-proxy.png"::: - ## Parent gateway configuration-1. Add the Microsoft Connected Cache module to your Azure IoT Edge gateway device deployment in Azure IoT Hub (see [Support for Disconnected Devices](connected-cache-disconnected-device-update.md) for details on how to get the module). -2. Add the environment variables for the deployment. Below is an example of the environment variables. -- **Environment Variables** -- | Name | Value | - | -- | -| - | CACHE_NODE_ID | See [environment variable](connected-cache-configure.md) descriptions | - | CUSTOMER_ID | See [environment variable](connected-cache-configure.md) descriptions | - | CUSTOMER_KEY | See [environment variable](connected-cache-configure.md) descriptions | - | STORAGE_1_SIZE_GB | 10 | - | CACHEABLE_CUSTOM_1_HOST | Packagerepo.com:80 | - | CACHEABLE_CUSTOM_1_CANONICAL | Packagerepo.com | - | IS_SUMMARY_ACCESS_UNRESTRICTED| true | --3. Add the container create options for the deployment. There is no difference in MCC container create options from the previous example. Below is an example of the container create options. --### Container create options --```json -{ - "HostConfig": { - "Binds": [ - "/MicrosoftConnectedCache1/:/nginx/cache1/" - ], - "PortBindings": { - "8081/tcp": [ - { - "HostPort": "80" - } - ], - "5000/tcp": [ - { - "HostPort": "5100" - } - ] - } - } -} -``` ++Use the following steps to configure the Microsoft Connected Cache module on the parent gateway device. ++1. Add the Microsoft Connected Cache module to your Azure IoT Edge gateway device deployment in Azure IoT Hub (see [Support for disconnected devices](connected-cache-disconnected-device-update.md) for details on how to request access to the preview module). +2. Add the environment variables for the deployment. The following table is an example of the environment variables: ++ | Name | Value | + | - | -- | + | CACHE_NODE_ID | See [environment variable](connected-cache-disconnected-device-update.md#module-environment-variables) descriptions | + | CUSTOMER_ID | See [environment variable](connected-cache-disconnected-device-update.md#module-environment-variables) descriptions | + | CUSTOMER_KEY | See [environment variable](connected-cache-disconnected-device-update.md#module-environment-variables) descriptions | + | STORAGE_1_SIZE_GB | 10 | + | CACHEABLE_CUSTOM_1_HOST | Packagerepo.com:80 | + | CACHEABLE_CUSTOM_1_CANONICAL | Packagerepo.com | + | IS_SUMMARY_ACCESS_UNRESTRICTED | true | ++3. Add the container create options for the deployment. There's no difference in MCC container create options for single or nested gateways. The following example shows the container create options for the MCC module: ++ ```json + { + "HostConfig": { + "Binds": [ + "/MicrosoftConnectedCache1/:/nginx/cache1/" + ], + "PortBindings": { + "8081/tcp": [ + { + "HostPort": "80" + } + ], + "5000/tcp": [ + { + "HostPort": "5100" + } + ] + } + } + } + ``` ## Child gateway configuration +Use the following steps to configure the Microsoft Connected Cache module on the child gateway device. 
+ >[!Note]->If you have replicated containers used in your configuration in your own private registry, then there will need to be a modification to the config.toml settings and runtime settings in your module deployment. For more information, refer to [Connect downstream IoT Edge devices - Azure IoT Edge](../iot-edge/how-to-connect-downstream-iot-edge-device.md?preserve-view=true&tabs=azure-portal&view=iotedge-2020-11#deploy-modules-to-lower-layer-devices) for more details. +>If you have replicated containers used in your configuration in your own private registry, then there will need to be a modification to the config.toml settings and runtime settings in your module deployment. For more information, see [Connect downstream IoT Edge devices](../iot-edge/how-to-connect-downstream-iot-edge-device.md#deploy-modules-to-lower-layer-devices). +1. Modify the image path for the IoT Edge agent as demonstrated in the example below: -1. Modify the image path for the Edge agent as demonstrated in the example below: + ```markdown + [agent] + name = "edgeAgent" + type = "docker" + env = {} + [agent.config] + image = "<parent_device_fqdn_or_ip>:8000/iotedge/azureiotedge-agent:1.2.0-rc2" + auth = {} + ``` - ```markdown - [agent] - name = "edgeAgent" - type = "docker" - env = {} - [agent.config] - image = "<parent_device_fqdn_or_ip>:8000/iotedge/azureiotedge-agent:1.2.0-rc2" - auth = {} - ``` -2. Modify the Edge Hub and Edge agent Runtime Settings in the Azure IoT Edge deployment as demonstrated in this example: - - * Under Edge Hub, in the image field, enter ```$upstream:8000/iotedge/azureiotedge-hub:1.2.0-rc2``` - * Under Edge Agent, in the image field, enter ```$upstream:8000/iotedge/azureiotedge-agent:1.2.0-rc2``` +2. Modify the IoT Edge hub and agent runtime settings in the IoT Edge deployment as demonstrated in this example: ++ * For the IoT Edge hub image, enter `$upstream:8000/iotedge/azureiotedge-hub:1.2.0-rc2` + * For the IoT Edge agent image, enter `$upstream:8000/iotedge/azureiotedge-agent:1.2.0-rc2` 3. Add the Microsoft Connected Cache module to your Azure IoT Edge gateway device deployment in Azure IoT Hub. - * Choose a name for your module: ```ConnectedCache``` - * Modify the Image URI: ```$upstream:8000/mcc/linux/iot/mcc-ubuntu-iot-amd64:latest``` + * Choose a name for your module: `ConnectedCache` + * Modify the image URI: `$upstream:8000/mcc/linux/iot/mcc-ubuntu-iot-amd64:latest` 4. Add the same set of environment variables and container create options used in the parent deployment.->[!Note] ->The CACHE_NODE_ID shoudl be unique. The CUSTOMER_ID and CUSTOMER_KEY values will be identical to the parent. (see [Configure Microsoft Connected Cache](connected-cache-configure.md) -For a validation of properly functioning Microsoft Connected Cache, execute the following command in the terminal of the IoT Edge device hosting the module or any device on the network. Replace \<Azure IoT Edge Gateway IP\> with the IP address or hostname of your IoT Edge gateway. (see environment variable details for information on visibility of this report). + >[!Note] + >The CACHE_NODE_ID should be unique. The CUSTOMER_ID and CUSTOMER_KEY values will be identical to the parent. For more information, see [Module environment variables](connected-cache-disconnected-device-update.md#module-environment-variables). ++For a validation of properly functioning Microsoft Connected Cache, execute the following command in the terminal of the IoT Edge device hosting the module or any device on the network. 
Replace \<Azure IoT Edge Gateway IP\> with the IP address or hostname of your IoT Edge gateway. For information on the visibility of this report, see [Microsoft Connected Cache summary report](./connected-cache-disconnected-device-update.md#microsoft-connected-cache-summary-report). ++```bash +wget http://<CHILD Azure IoT Edge Gateway IP>/mscomtest/wuidt.gif?cacheHostOrigin=au.download.windowsupdate.com +``` ++## Industrial IoT (IIoT) configuration ++Manufacturing networks are often organized in hierarchical layers following the [Purdue network model](https://en.wikipedia.org/wiki/Purdue_Enterprise_Reference_Architecture) (included in the [ISA 95](https://en.wikipedia.org/wiki/ANSI/ISA-95) and [ISA 99](https://www.isa.org/standards-and-publications/isa-standards/isa-standards-committees/isa99) standards). In these networks, only the top layer has connectivity to the cloud and the lower layers in the hierarchy can only communicate with adjacent north and south layers. ++This GitHub sample, [Azure IoT Edge for Industrial IoT](https://github.com/Azure-Samples/iot-edge-for-iiot), deploys the following components: ++* Simulated Purdue network in Azure +* Industrial assets +* Hierarchy of Azure IoT Edge gateways + +These components will be used to acquire industrial data and securely upload it to the cloud without compromising the security of the network. Microsoft Connected Cache can be deployed to support the download of content at all levels within the ISA 95 compliant network. ++The key to configuring Microsoft Connected Cache deployments within an ISA 95 compliant network is configuring both the OT proxy *and* the upstream host at the L3 IoT Edge gateway. ++1. Configure Microsoft Connected Cache deployments at the L5 and L4 levels as described in the two-level nested gateway sample earlier in this article. +2. The deployment at the L3 IoT Edge gateway must specify: ++ * UPSTREAM_HOST - The IP/FQDN of the L4 IoT Edge gateway, from which the L3 Microsoft Connected Cache will request content. + * UPSTREAM_PROXY - The IP/FQDN:PORT of the OT proxy server. ++3. The OT proxy must add the L4 MCC FQDN/IP address to the allowlist. ++To validate that Microsoft Connected Cache is functioning properly, execute the following command in the terminal of the IoT Edge device hosting the module, or any device on the network. Replace \<Azure IoT Edge Gateway IP\> with the IP address or hostname of your IoT Edge gateway. For information on the visibility of this report, see [Microsoft Connected Cache summary report](./connected-cache-disconnected-device-update.md#microsoft-connected-cache-summary-report). ```bash- wget http://<L3 IoT Edge Gateway IP>/mscomtest/wuidt.gif?cacheHostOrigin=au.download.windowsupdate.com -``` +wget http://<L3 IoT Edge Gateway IP>/mscomtest/wuidt.gif?cacheHostOrigin=au.download.windowsupdate.com +``` |
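One configuration detail worth surfacing from the environment variable table: in these nested layouts, each downstream MCC instance reaches its parent through UPSTREAM_HOST (and, where an OT proxy sits between layers, UPSTREAM_PROXY). As a hedged fragment in the standard IoT Edge deployment manifest `env` format, with placeholder values, the L3 module's extra variables might look like:

```json
"env": {
  "UPSTREAM_HOST": { "value": "<L4 IoT Edge gateway FQDN or IP>" },
  "UPSTREAM_PROXY": { "value": "<OT proxy FQDN or IP>:<port>" }
}
```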
iot-hub-device-update | Connected Cache Single Level | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-hub-device-update/connected-cache-single-level.md | Title: Microsoft Connected Cache preview deployment scenario samples + Title: Deploy Microsoft Connected Cache on a gateway -description: Microsoft Connected Cache preview deployment scenario samples tutorials -+description: Update disconnected devices with Device Update using the Microsoft Connected Cache module on IoT Edge gateways + Previously updated : 2/16/2021 Last updated : 04/14/2023 -# Microsoft Connected Cache preview deployment scenario samples +# Deploy the Microsoft Connected Cache module on a single gateway (preview) ++The Microsoft Connected Cache (MCC) module for IoT Edge gateways enables Device Update for disconnected devices behind the gateway. This article introduces two different configurations for deploying the MCC module on an IoT Edge gateway. ++If you have multiple IoT Edge gateways chained together, refer to the instructions in [Deploy the Microsoft Connected Cache module on nested gateways](./connected-cache-nested-level.md). > [!NOTE] > This information relates to a preview feature that's available for early testing and use in a production environment. This feature is fully supported but it's still in active development and may receive substantial changes until it becomes generally available. -## Single level Azure IoT Edge gateway no proxy --The diagram below describes the scenario where an Azure IoT Edge gateway that has direct access to CDN resources and there is an Azure IoT leaf device such as a Raspberry PI that is an internet isolated child devices of the Azure IoT Edge gateway. +## Deploy to a gateway with no proxy - :::image type="content" source="media/connected-cache-overview/disconnected-device-update.png" alt-text="Microsoft Connected Cache Disconnected Device Update" lightbox="media/connected-cache-overview/disconnected-device-update.png"::: +The following diagram describes the scenario where an Azure IoT Edge gateway has direct access to content delivery network (CDN) resources, and has the Microsoft Connected Cache module deployed on it. Behind the gateway, there's an IoT leaf device such as a Raspberry Pi that is an internet-isolated child device of the IoT Edge gateway. -1. Add the Microsoft Connected Cache module to your Azure IoT Edge gateway device deployment in Azure IoT Hub (see [Support for Disconnected Devices](connected-cache-disconnected-device-update.md) for details on how to get the module). -2. Add the environment variables for the deployment. Below is an example of the environment variables. - **Environment Variables** - - | Name | Value | - | -- | -| - | CACHE_NODE_ID | See [environment variable](connected-cache-configure.md) descriptions | - | CUSTOMER_ID | See [environment variable](connected-cache-configure.md) descriptions | - | CUSTOMER_KEY | See [environment variable](connected-cache-configure.md) descriptions | - | STORAGE_1_SIZE_GB | 10 | --3. Add the container create options for the deployment. Below is an example of the container create options. 
--### Container create options --```json -{ - "HostConfig": { - "Binds": [ - "/MicrosoftConnectedCache1/:/nginx/cache1/" - ], - "PortBindings": { - "8081/tcp": [ - { - "HostPort": "80" - } - ], - "5000/tcp": [ - { - "HostPort": "5100" - } - ] - } - } -} -``` +The following steps are an example of configuring the MCC environment variables to connect directly to the CDN with no proxy: -For a validation of properly functioning Microsoft Connected Cache, execute the following command in the terminal of the IoT Edge device hosting the module or any device on the network. Replace \<Azure IoT Edge Gateway IP\> with the IP address or hostname of your IoT Edge gateway. (see environment variable details for information on visibility of this report). +1. Add the Microsoft Connected Cache module to your Azure IoT Edge gateway device deployment in Azure IoT Hub (see [Support for Disconnected Devices](connected-cache-disconnected-device-update.md) for details on how to get the module). +2. Add the environment variables for the deployment. The following table is an example of the environment variables: ++ | Name | Value | + | -- | -- | + | CACHE_NODE_ID | See [environment variable](connected-cache-disconnected-device-update.md#module-environment-variables) descriptions | + | CUSTOMER_ID | See [environment variable](connected-cache-disconnected-device-update.md#module-environment-variables) descriptions | + | CUSTOMER_KEY | See [environment variable](connected-cache-disconnected-device-update.md#module-environment-variables) descriptions | + | STORAGE_1_SIZE_GB | 10 | ++3. Add the container create options for the deployment. For example: ++ ```json + { + "HostConfig": { + "Binds": [ + "/MicrosoftConnectedCache1/:/nginx/cache1/" + ], + "PortBindings": { + "8081/tcp": [ + { + "HostPort": "80" + } + ], + "5000/tcp": [ + { + "HostPort": "5100" + } + ] + } + } + } + ``` ++For a validation of properly functioning Microsoft Connected Cache, execute the following command in the terminal of the IoT Edge device hosting the module or any device on the network. Replace \<Azure IoT Edge Gateway IP\> with the IP address or hostname of your IoT Edge gateway. For information on the visibility of this report, see [Microsoft Connected Cache summary report](./connected-cache-disconnected-device-update.md#microsoft-connected-cache-summary-report). ```bash- wget http://<IoT Edge Gateway IP>/mscomtest/wuidt.gif?cacheHostOrigin=au.download.windowsupdate.com +wget http://<IoT Edge Gateway IP>/mscomtest/wuidt.gif?cacheHostOrigin=au.download.windowsupdate.com ``` -## Single level Azure IoT Edge gateway with outbound unauthenticated proxy +## Deploy to a gateway with outbound unauthenticated proxy ++In this scenario, an Azure IoT Edge Gateway has access to content delivery network (CDN) resources through an outbound unauthenticated proxy. Microsoft Connected Cache is configured to cache content from a custom repository and the summary report is visible to anyone on the network. -In this scenario there is an Azure IoT Edge Gateway that has access to CDN resources through an outbound unauthenticated proxy. Microsoft Connected Cache is being configured to cache content from a custom repository and the summary report has been made visible to anyone on the network. Below is an example of the MCC environment variables that would be set. 
- :::image type="content" source="media/connected-cache-overview/single-level-proxy.png" alt-text="Microsoft Connected Cache Single Level Proxy" lightbox="media/connected-cache-overview/single-level-proxy.png"::: +The following steps are an example of configuring the MCC environment variables to support an outbound unauthenticated proxy: 1. Add the Microsoft Connected Cache module to your Azure IoT Edge gateway device deployment in Azure IoT Hub. 2. Add the environment variables for the deployment. Below is an example of the environment variables. - **Environment Variables** -- | Name | Value | - | -- | -| - | CACHE_NODE_ID | See [environment variable](connected-cache-configure.md) descriptions | - | CUSTOMER_ID | See [environment variable](connected-cache-configure.md) descriptions | - | CUSTOMER_KEY | See [environment variable](connected-cache-configure.md) descriptions | - | STORAGE_1_SIZE_GB | 10 | - | CACHEABLE_CUSTOM_1_HOST | Packagerepo.com:80 | - | CACHEABLE_CUSTOM_1_CANONICAL | Packagerepo.com | - | IS_SUMMARY_ACCESS_UNRESTRICTED| true | - | UPSTREAM_PROXY | Your proxy server IP or FQDN | --3. Add the container create options for the deployment. There is no difference in MCC container create options from the previous example. Below is an example of the container create options. --### Container create options --```json -{ - "HostConfig": { - "Binds": [ - "/MicrosoftConnectedCache1/:/nginx/cache1/" - ], - "PortBindings": { - "8081/tcp": [ - { - "HostPort": "80" - } - ], - "5000/tcp": [ - { - "HostPort": "5100" - } - ] - } - } -} -``` --For a validation of properly functioning Microsoft Connected Cache, execute the following command in the terminal of the Azure IoT Edge device hosting the module or any device on the network. Replace \<Azure IoT Edge Gateway IP\> with the IP address or hostname of your IoT Edge gateway. (see environment variable details for information on visibility of this report). + | Name | Value | + | -- | - | + | CACHE_NODE_ID | See [environment variable](connected-cache-disconnected-device-update.md#module-environment-variables) descriptions | + | CUSTOMER_ID | See [environment variable](connected-cache-disconnected-device-update.md#module-environment-variables) descriptions | + | CUSTOMER_KEY | See [environment variable](connected-cache-disconnected-device-update.md#module-environment-variables) descriptions | + | STORAGE_1_SIZE_GB | 10 | + | CACHEABLE_CUSTOM_1_HOST | Packagerepo.com:80 | + | CACHEABLE_CUSTOM_1_CANONICAL | Packagerepo.com | + | IS_SUMMARY_ACCESS_UNRESTRICTED| true | + | UPSTREAM_PROXY | Your proxy server IP or FQDN | ++3. Add the container create options for the deployment. For example: ++ ```json + { + "HostConfig": { + "Binds": [ + "/MicrosoftConnectedCache1/:/nginx/cache1/" + ], + "PortBindings": { + "8081/tcp": [ + { + "HostPort": "80" + } + ], + "5000/tcp": [ + { + "HostPort": "5100" + } + ] + } + } + } + ``` ++For a validation of properly functioning Microsoft Connected Cache, execute the following command in the terminal of the Azure IoT Edge device hosting the module or any device on the network. Replace \<Azure IoT Edge Gateway IP\> with the IP address or hostname of your IoT Edge gateway. For information on the visibility of this report, see [Microsoft Connected Cache summary report](./connected-cache-disconnected-device-update.md#microsoft-connected-cache-summary-report). 
```bash- wget http://<Azure IoT Edge Gateway IP>/mscomtest/wuidt.gif?cacheHostOrigin=au.download.windowsupdate.com +wget http://<Azure IoT Edge Gateway IP>/mscomtest/wuidt.gif?cacheHostOrigin=au.download.windowsupdate.com ``` |
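Tying the pieces in this entry together, the environment variables and container create options land in a single module entry of the IoT Edge deployment manifest. The following is a minimal sketch, assuming a placeholder image URI (the real URI is provided with preview access) and the same create options JSON shown above, escaped into a string:

```json
"ConnectedCache": {
  "type": "docker",
  "status": "running",
  "restartPolicy": "always",
  "settings": {
    "image": "<registry>/mcc-ubuntu-iot-amd64:latest",
    "createOptions": "{\"HostConfig\":{\"Binds\":[\"/MicrosoftConnectedCache1/:/nginx/cache1/\"],\"PortBindings\":{\"8081/tcp\":[{\"HostPort\":\"80\"}],\"5000/tcp\":[{\"HostPort\":\"5100\"}]}}}"
  },
  "env": {
    "CACHE_NODE_ID": { "value": "<cache node GUID>" },
    "CUSTOMER_ID": { "value": "<Azure subscription ID GUID>" },
    "CUSTOMER_KEY": { "value": "<customer key GUID>" },
    "STORAGE_1_SIZE_GB": { "value": "10" },
    "UPSTREAM_PROXY": { "value": "<proxy server IP or FQDN>" }
  }
}
```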
iot | Iot Device Sdks Lifecycle And Support | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot/iot-device-sdks-lifecycle-and-support.md | + + Title: Azure IoT device SDKs lifecycle and support +description: Describe the lifecycle and support for our IoT Hub and DPS device SDKs +++++ Last updated : 4/13/2023+++# Azure IoT Device SDK lifecycle and support ++This article describes the Azure IoT Device SDK lifecycle and support policy. For more information, see [Azure SDK Lifecycle and support policy](https://azure.github.io/azure-sdk/policies_support.html). ++## Package lifecycle ++The releases fall into the following categories, each with a defined support structure: +1. **Beta** - Also known as Preview or Release Candidate. Available for early access and feedback purposes and **is not recommended** for use in production. The preview version support is limited to GitHub issues. Preview releases typically live for less than six months, after which they're either deprecated or released as active. ++1. **Active** - Generally available and fully supported, receives new feature updates, as well as bug and security fixes. We recommend that customers use the **latest version** because that version receives fixes and updates. ++1. **Deprecated** - Superseded by a more recent release. Deprecation occurs at the same time the new release becomes active. Deprecated releases receive the most critical bug and security fixes for another **12 months**. ++## Get support ++If you experience problems while using the Azure IoT SDKs, there are several ways to seek support: ++* **Reporting bugs** - All customers can report bugs on the issues page for the GitHub repository associated with the relevant SDK. ++* **Microsoft Customer Support team** - Users who have a [support plan](https://azure.microsoft.com/support/plans/) can engage the Microsoft Customer Support team by creating a support ticket directly from the [Azure portal](https://portal.azure.com/signin/index/?feature.settingsportalinstance=mpac). |
iot | Iot Introduction | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot/iot-introduction.md | The Azure Internet of Things (IoT) is a collection of Microsoft-managed cloud se The following diagram shows a high-level view of the components in a typical IoT solution. This article focuses on the key groups of components: devices, IoT cloud services, other cloud services, and solution-wide concerns. Other articles in this section provide more detail on each of these components. ## IoT devices |
iot | Iot Overview Analyze Visualize | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot/iot-overview-analyze-visualize.md | + + Title: Analyze and visualize your IoT data +description: An overview of the available options to analyze and visualize data in an IoT solution. +++++ Last updated : 04/11/2023+++# As a solution builder, I want a high-level overview of the options for analyzing and visualizing device data in an IoT solution. +++# Analyze and visualize your IoT data ++This overview introduces the key concepts around the options to analyze and visualize your IoT data. Each section includes links to content that provides further detail and guidance. ++The following diagram shows a high-level view of the components in a typical IoT solution. This article focuses on the areas relevant to analyzing and visualizing your IoT data. +++In A |