Updates from: 04/15/2023 01:12:23
Service Microsoft Docs article Related commit history on GitHub Change details
active-directory-b2c Custom Policies Series Call Rest Api https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/custom-policies-series-call-rest-api.md
You need to deploy an app, which will serve as your external app. Your custom po
"code" : "errorCode", "requestId": "requestId", "userMessage" : "The access code you entered is incorrect. Please try again.",
- "developerMessage" : `The The provided code ${req.body.accessCode} does not match the expected code for user.`,
+ "developerMessage" : `The provided code ${req.body.accessCode} does not match the expected code for user.`,
"moreInfo" :"https://docs.microsoft.com/en-us/azure/active-directory-b2c/string-transformations" }; res.status(409).send(errorResponse);
You need to deploy an app, which will serve as your external app. Your custom po
"code": "errorCode", "requestId": "requestId", "userMessage": "The access code you entered is incorrect. Please try again.",
- "developerMessage": "The The provided code 54321 does not match the expected code for user.",
+ "developerMessage": "The provided code 54321 does not match the expected code for user.",
"moreInfo": "https://docs.microsoft.com/en-us/azure/active-directory-b2c/string-transformations" } ```
active-directory Plan Auto User Provisioning https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/app-provisioning/plan-auto-user-provisioning.md
Previously updated : 04/13/2023 Last updated : 04/14/2023
This article uses the following terms:
* Single sign-on (SSO) - The ability for a user to sign on once and access all SSO-enabled applications. In the context of user provisioning, SSO is a result of users having a single account to access all systems that use automatic user provisioning.
-* Source system - The repository of users that the Azure AD provisions from. Azure AD is the source system for most pre-integrated provisioning connectors. However, there are some exceptions for cloud applications such as SAP, Workday, and AWS. For example, see [User provisioning from Workday to AD](../saas-apps/workday-inbound-tutorial.md).
+* Source system - The repository of users that Azure AD provisions from. Azure AD is the source system for most preintegrated provisioning connectors. However, there are some exceptions for cloud applications such as SAP, Workday, and AWS. For example, see [User provisioning from Workday to AD](../saas-apps/workday-inbound-tutorial.md).
* Target system - The repository of users that Azure AD provisions to. The target system is typically a SaaS application such as ServiceNow, Zscaler, and Slack. The target system can also be an on-premises system such as AD.
Use the Azure portal to view and manage all the applications that support provis
### Determine the type of connector to use
-The actual steps required to enable and configure automatic provisioning vary depending on the application. If the application you wish to automatically provision is listed in the [Azure AD SaaS app gallery](../saas-apps/tutorial-list.md), then you should select the [app-specific integration tutorial](../saas-apps/tutorial-list.md) to configure its pre-integrated user provisioning connector.
+The actual steps required to enable and configure automatic provisioning vary depending on the application. If the application you wish to automatically provision is listed in the [Azure AD SaaS app gallery](../saas-apps/tutorial-list.md), then you should select the [app-specific integration tutorial](../saas-apps/tutorial-list.md) to configure its preintegrated user provisioning connector.
If not, follow the steps:
-1. [Create a request](../manage-apps/v2-howto-app-gallery-listing.md) for a pre-integrated user provisioning connector. Our team works with you and the application developer to onboard your application to our platform if it supports SCIM.
+1. [Create a request](../manage-apps/v2-howto-app-gallery-listing.md) for a preintegrated user provisioning connector. Our team works with you and the application developer to onboard your application to our platform if it supports SCIM.
-1. Use the [BYOA SCIM](../app-provisioning/use-scim-to-provision-users-and-groups.md) generic user provisioning support for the app. Using SCIM is a requirement for Azure AD to provision users to the app without a pre-integrated provisioning connector.
+1. Use the [BYOA SCIM](../app-provisioning/use-scim-to-provision-users-and-groups.md) generic user provisioning support for the app. Using SCIM is a requirement for Azure AD to provision users to the app without a preintegrated provisioning connector.
1. If the application is able to utilize the BYOA SCIM connector, then refer to [BYOA SCIM integration tutorial](../app-provisioning/use-scim-to-provision-users-and-groups.md) to configure the BYOA SCIM connector for the application.
Before implementing automatic user provisioning, you must determine the users an
### Define user and group attribute mapping
-To implement automatic user provisioning, you need to define the user and group attributes that are needed for the application. There's a pre-configured set of attributes and [attribute-mappings](../app-provisioning/configure-automatic-user-provisioning-portal.md) between Azure AD user objects, and each SaaS application's user objects. Not all SaaS apps enable group attributes.
+To implement automatic user provisioning, you need to define the user and group attributes that are needed for the application. There's a preconfigured set of attributes and [attribute-mappings](../app-provisioning/configure-automatic-user-provisioning-portal.md) between Azure AD user objects, and each SaaS application's user objects. Not all SaaS apps enable group attributes.
Azure AD supports direct attribute-to-attribute mapping, providing constant values, or [writing expressions for attribute mappings](../app-provisioning/functions-for-customizing-application-data.md). This flexibility gives you fine control over what is populated in the targeted system's attribute. You can use [Microsoft Graph API](../app-provisioning/export-import-provisioning-configuration.md) and Graph Explorer to export your user provisioning attribute mappings and schema to a JSON file and import it back into Azure AD.
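Here's a minimal sketch of that export step using the Microsoft Graph synchronization API with `HttpClient`, assuming placeholder IDs and an access token that has the synchronization permissions your tenant requires:

```csharp
// Sketch: export the provisioning attribute-mapping schema by calling the
// Microsoft Graph synchronization API. The token and IDs are placeholders.
using System;
using System.IO;
using System.Net.Http;
using System.Net.Http.Headers;
using System.Threading.Tasks;

class ExportProvisioningSchema
{
    static async Task Main()
    {
        string accessToken = "<token-with-synchronization-permissions>"; // placeholder
        string servicePrincipalId = "<service-principal-object-id>";     // placeholder
        string jobId = "<provisioning-job-id>";                          // placeholder

        using var client = new HttpClient();
        client.DefaultRequestHeaders.Authorization =
            new AuthenticationHeaderValue("Bearer", accessToken);

        // GET .../synchronization/jobs/{jobId}/schema returns the attribute mappings as JSON.
        string url = $"https://graph.microsoft.com/v1.0/servicePrincipals/{servicePrincipalId}/synchronization/jobs/{jobId}/schema";
        string schemaJson = await client.GetStringAsync(url);

        // Save the schema so it can be reviewed, edited, and imported back into Azure AD.
        await File.WriteAllTextAsync("provisioning-schema.json", schemaJson);
        Console.WriteLine("Exported provisioning schema to provisioning-schema.json");
    }
}
```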
First, configure automatic user provisioning for the application. Then run test
| Scenarios| Expected results | | - | - |
-| User is added to a group assigned to the target system | User object is provisioned in target system. <br>User can sign-in to target system and perform the desired actions. |
-| User is removed from a group that is assigned to target system | User object is deprovisioned in the target system.<br>User can't sign-in to target system. |
-| User information is updated in Azure AD by any method | Updated user attributes are reflected in target system after an incremental cycle |
-| User is out of scope | User object is disabled or deleted. <br>Note: This behavior is overridden for [Workday provisioning](skip-out-of-scope-deletions.md). |
+| User is added to a group assigned to the target system. | User object is provisioned in target system. <br>User can sign-in to target system and perform the desired actions. |
+| User is removed from a group that is assigned to target system. | User object is deprovisioned in the target system.<br>User can't sign-in to target system. |
+| User information updates in Azure AD by any method. | Updated user attributes reflect in the target system after an incremental cycle. |
+| User is out of scope. | User object is disabled or deleted. <br>Note: This behavior is overridden for [Workday provisioning](skip-out-of-scope-deletions.md). |
### Plan security
The provisioning service stores the state of both systems after the initial cycl
### Configure automatic user provisioning
-Use the [Azure portal](https://portal.azure.com/) to manage automatic user account provisioning and de-provisioning for applications that support it. Follow the steps in [How do I set up automatic provisioning to an application?](../app-provisioning/user-provisioning.md)
+Use the [Azure portal](https://portal.azure.com/) to manage automatic user account provisioning and deprovisioning for applications that support it. Follow the steps in [How do I set up automatic provisioning to an application?](../app-provisioning/user-provisioning.md)
The Azure AD user provisioning service can also be configured and managed using the [Microsoft Graph API](/graph/api/resources/synchronization-overview).
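For example, a provisioning job can be started through the Graph synchronization API; this is a minimal sketch, assuming placeholder IDs and token:

```csharp
// Sketch: start a provisioning job through the Microsoft Graph synchronization API.
// The token and IDs are placeholders.
using System;
using System.Net.Http;
using System.Net.Http.Headers;
using System.Threading.Tasks;

class StartProvisioningJob
{
    static async Task Main()
    {
        string accessToken = "<token-with-synchronization-permissions>"; // placeholder
        string servicePrincipalId = "<service-principal-object-id>";     // placeholder
        string jobId = "<provisioning-job-id>";                          // placeholder

        using var client = new HttpClient();
        client.DefaultRequestHeaders.Authorization =
            new AuthenticationHeaderValue("Bearer", accessToken);

        // POST .../synchronization/jobs/{jobId}/start kicks off the provisioning cycle.
        string url = $"https://graph.microsoft.com/v1.0/servicePrincipals/{servicePrincipalId}/synchronization/jobs/{jobId}/start";
        HttpResponseMessage response = await client.PostAsync(url, content: null);

        Console.WriteLine(response.IsSuccessStatusCode
            ? "Provisioning job started."
            : $"Start request failed with status {(int)response.StatusCode}.");
    }
}
```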
After a successful [initial cycle](../app-provisioning/user-provisioning.md), th
* The service is manually stopped, and a new initial cycle is triggered using the [Azure portal](https://portal.azure.com/), or using the appropriate [Microsoft Graph API](/graph/api/resources/synchronization-overview) command.
-* A new initial cycle is triggered by a change in attribute mappings or scoping filters.
+* A change in attribute mappings or scoping filters triggers a new initial cycle.
-* The provisioning process goes into quarantine due to a high error rate and stays in quarantine for more than four weeks then it is automatically disabled.
+* The provisioning process goes into quarantine due to a high error rate, stays in quarantine for more than four weeks, and is then automatically disabled.
To review these events, and all other activities performed by the provisioning service, refer to Azure AD [provisioning logs](../reports-monitoring/concept-provisioning-logs.md?context=azure/active-directory/manage-apps/context/manage-apps-context).
To understand how long the provisioning cycles take and monitor the progress of
### Gain insights from reports
-Azure AD can provide [additional insights](../app-provisioning/application-provisioning-when-will-provisioning-finish-specific-user.md) into your organization's user provisioning usage and operational health through audit logs and reports.
+Azure AD can provide more insights into your organization's user provisioning usage and operational health through audit logs and reports. To learn more about user insights, see [Check the status of user provisioning](application-provisioning-when-will-provisioning-finish-specific-user.md).
Admins should check the provisioning summary report to monitor the operational health of the provisioning job. All activities performed by the provisioning service are recorded in the Azure AD audit logs. See [Tutorial: Reporting on automatic user account provisioning](../app-provisioning/check-status-user-account-provisioning.md).
active-directory Application Model https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/application-model.md
For an identity provider to know that a user has access to a particular app, bot
* Decide if you want to allow users to sign in only if they belong to your organization. This architecture is known as a single-tenant application. Or, you can allow users to sign in by using any work or school account, which is known as a multi-tenant application. You can also allow personal Microsoft accounts or a social account from LinkedIn, Google, and so on. * Request scope permissions. For example, you can request the "user.read" scope, which grants permission to read the profile of the signed-in user. * Define scopes that define access to your web API. Typically, when an app wants to access your API, it will need to request permissions to the scopes you define.
-* Share a secret with the Microsoft identity platform that proves the app's identity. Using a secret is relevant in the case where the app is a confidential client application. A confidential client application is an application that can hold credentials securely. A trusted back-end server is required to store the credentials.
+* Share a secret with the Microsoft identity platform that proves the app's identity. Using a secret is relevant in the case where the app is a confidential client application. A confidential [client application](developer-glossary.md#client-application) is an application that can hold credentials securely, like a [web client](developer-glossary.md#web-client). A trusted back-end server is required to store the credentials.
-After the app is registered, it's given a unique identifier that it shares with the Microsoft identity platform when it requests tokens. If the app is a [confidential client application](developer-glossary.md#client-application), it will also share the secret or the public key depending on whether certificates or secrets were used.
+After the app is registered, it's given a unique identifier that it shares with the Microsoft identity platform when it requests tokens. If the app is a confidential client application, it will also share the secret or the public key depending on whether certificates or secrets were used.
The Microsoft identity platform represents applications by using a model that fulfills two main functions:
The Microsoft identity platform:
* Provides infrastructure for implementing app provisioning within the app developer's tenant, and to any other Azure AD tenant. * Handles user consent during token request time and facilitates the dynamic provisioning of apps across tenants.
-*Consent* is the process of a resource owner granting authorization for a client application to access protected resources, under specific permissions, on behalf of the resource owner. The Microsoft identity platform enables:
+[*Consent*](developer-glossary.md#consent) is the process of a resource owner granting authorization for a client application to access protected resources, under specific permissions, on behalf of the resource owner. The Microsoft identity platform enables:
* Users and administrators to dynamically grant or deny consent for the app to access resources on their behalf. * Administrators to ultimately decide what apps are allowed to do and which users can use specific apps, and how the directory resources are accessed. ## Multi-tenant apps
-In the Microsoft identity platform, an [application object](developer-glossary.md#application-object) describes an application. At deployment time, the Microsoft identity platform uses the application object as a blueprint to create a [service principal](developer-glossary.md#service-principal-object), which represents a concrete instance of an application within a directory or tenant. The service principal defines what the app can actually do in a specific target directory, who can use it, what resources it has access to, and so on. The Microsoft identity platform creates a service principal from an application object through [consent](developer-glossary.md#consent).
+In the Microsoft identity platform, an [application object](developer-glossary.md#application-object) describes an application. At deployment time, the Microsoft identity platform uses the application object as a blueprint to create a [service principal](developer-glossary.md#service-principal-object), which represents a concrete instance of an application within a directory or tenant. The service principal defines what the app can actually do in a specific target directory, who can use it, what resources it has access to, and so on. The Microsoft identity platform creates a service principal from an application object through consent.
The following diagram shows a simplified Microsoft identity platform provisioning flow driven by consent. It shows two tenants: *A* and *B*.
active-directory Quickstart V2 Netcore Daemon https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/quickstart-v2-netcore-daemon.md
Previously updated : 01/10/2022 Last updated : 04/13/2023
> > ![Diagram that shows how the sample app generated by this quickstart works.](media/quickstart-v2-netcore-daemon/> netcore-daemon-intro.svg) >
-> ### MSAL.NET
+> ### Microsoft.Identity.Web.MicrosoftGraph
>
-> Microsoft Authentication Library (MSAL, in the [Microsoft.Identity.Client](https://www.nuget.org/packages/Microsoft.Identity.Client) package) is the library that's used to sign in users and request tokens for accessing an API protected by the Microsoft identity platform. This quickstart requests tokens by using the application's own identity instead of delegated permissions. The authentication flow in this case is known as a [client credentials OAuth flow](v2-oauth2-client-creds-grant-flow.md). For more information on how to use MSAL.NET with a client credentials flow, see [this article](https://aka.ms/msal-net-client-credentials).
+> Microsoft Identity Web (in the [Microsoft.Identity.Web.TokenAcquisition](https://www.nuget.org/packages/Microsoft.Identity.Web.TokenAcquisition) package) is the library that's used to request tokens for accessing an API protected by the Microsoft identity platform. This quickstart requests tokens by using the application's own identity instead of delegated permissions. The authentication flow in this case is known as a [client credentials OAuth flow](v2-oauth2-client-creds-grant-flow.md). For more information on how to use MSAL.NET with a client credentials flow, see [this article](https://aka.ms/msal-net-client-credentials). Because the daemon app in this quickstart calls Microsoft Graph, you install the [Microsoft.Identity.Web.MicrosoftGraph](https://www.nuget.org/packages/Microsoft.Identity.Web.MicrosoftGraph) package, which automatically handles authenticated requests to Microsoft Graph (and itself references Microsoft.Identity.Web.TokenAcquisition).
>
-> MSAL.NET can be installed by running the following command in the Visual Studio Package Manager Console:
+> Microsoft.Identity.Web.MicrosoftGraph can be installed by running the following command in the Visual Studio Package Manager Console:
> > ```dotnetcli
-> dotnet add package Microsoft.Identity.Client
+> dotnet add package Microsoft.Identity.Web.MicrosoftGraph
> ``` >
-> ### MSAL initialization
+> ### Application initialization
>
-> Add the reference for MSAL by adding the following code:
+> Add the reference for Microsoft.Identity.Web by adding the following code:
> > ```csharp
-> using Microsoft.Identity.Client;
+> using Microsoft.Extensions.Configuration;
+> using Microsoft.Extensions.DependencyInjection;
+> using Microsoft.Graph;
+> using Microsoft.Identity.Abstractions;
+> using Microsoft.Identity.Web;
> ``` >
-> Then, initialize MSAL with the following:
+> Then, initialize the app with the following:
> > ```csharp
-> IConfidentialClientApplication app;
-> app = ConfidentialClientApplicationBuilder.Create(config.ClientId)
-> .WithClientSecret(config.ClientSecret)
-> .WithAuthority(new Uri(config.Authority))
-> .Build();
+> // Get the Token acquirer factory instance. By default it reads an appsettings.json
+> // file if it exists in the same folder as the app (make sure that the
+> // "Copy to Output Directory" property of the appsettings.json file is "Copy if newer").
+> TokenAcquirerFactory tokenAcquirerFactory = TokenAcquirerFactory.GetDefaultInstance();
+>
+> // Configure the application options to be read from the configuration
+> // and add the services you need (Graph, token cache)
+> IServiceCollection services = tokenAcquirerFactory.Services;
+> services.AddMicrosoftGraph();
+> // By default, you get an in-memory token cache.
+> // For more token cache serialization options, see https://aka.ms/msal-net-token-cache-serialization
+>
+> // Resolve the dependency injection.
+> var serviceProvider = tokenAcquirerFactory.Build();
+> ```
+>
+> This code uses the configuration defined in the appsettings.json file:
+>
+> ```json
+> {
+> "AzureAd": {
+> "Instance": "https://login.microsoftonline.com/",
+> "TenantId": "[Enter here the tenantID or domain name for your Azure AD tenant]",
+> "ClientId": "[Enter here the ClientId for your application]",
+> "ClientCredentials": [
+> {
+> "SourceType": "ClientSecret",
+> "ClientSecret": "[Enter here a client secret for your application]"
+> }
+> ]
+> }
+> }
> ``` > > | Element | Description | > |||
-> | `config.ClientSecret` | The client secret created for the application in the Azure portal. |
-> | `config.ClientId` | The application (client) ID for the application registered in the Azure portal. This value can be found on the app's **Overview** page in the Azure portal. |
-> | `config.Authority` | (Optional) The security token service (STS) endpoint for the user to authenticate. It's usually `https://login.microsoftonline.com/{tenant}` for the public cloud, where `{tenant}` is the name of the tenant or the tenant ID.|
+> | `ClientSecret` | The client secret created for the application in the Azure portal. |
+> | `ClientId` | The application (client) ID for the application registered in the Azure portal. This value can be found on the app's **Overview** page in the Azure portal. |
> | `Instance` | (Optional) The security token service (STS) cloud instance endpoint for the app to authenticate. It's usually `https://login.microsoftonline.com/` for the public cloud.|
+> | `TenantId` | Name of the tenant or the tenant ID.|
>
-> For more information, see the [reference documentation for `ConfidentialClientApplication`](/dotnet/api/microsoft.identity.client.iconfidentialclientapplication).
+> For more information, see the [reference documentation for `TokenAcquirerFactory`](/dotnet/api/microsoft.identity.web.tokenacquirerfactory).
>
-> ### Requesting tokens
+> ### Calling Microsoft Graph
> > To call Microsoft Graph by using the app's own identity, get the `GraphServiceClient` from the service provider and request the users with app-only permissions: > > ```csharp
-> result = await app.AcquireTokenForClient(scopes)
-> .ExecuteAsync();
+> GraphServiceClient graphServiceClient = serviceProvider.GetRequiredService<GraphServiceClient>();
+> var users = await graphServiceClient.Users
+> .Request()
+> .WithAppOnly()
+> .GetAsync();
> ``` >
-> |Element| Description |
-> |||
-> | `scopes` | Contains the requested scopes. For confidential clients, this value should use a format similar to `{Application ID URI}/.default`. This format indicates that the requested scopes are the ones that are statically defined in the app object set in the Azure portal. For Microsoft Graph, `{Application ID URI}` points to `https://graph.microsoft.com`. For custom web APIs, `{Application ID URI}` is defined in the Azure portal, under **Application Registration (Preview)** > **Expose an API**. |
->
-> For more information, see the [reference documentation for `AcquireTokenForClient`](/dotnet/api/microsoft.identity.client.confidentialclientapplication.acquiretokenforclient).
->
> [!INCLUDE [Help and support](../../../includes/active-directory-develop-help-support-include.md)] > > ## Next steps
active-directory Reference Aadsts Error Codes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/reference-aadsts-error-codes.md
The `error` field has several possible values - review the protocol documentatio
| AADSTS50143 | Session mismatch - Session is invalid because user tenant doesn't match the domain hint due to different resource. [Open a support ticket](../fundamentals/active-directory-troubleshooting-support-howto.md) with Correlation ID, Request ID, and Error code to get more details. | | AADSTS50144 | InvalidPasswordExpiredOnPremPassword - User's Active Directory password has expired. Generate a new password for the user or have the user use the self-service reset tool to reset their password. | | AADSTS50146 | MissingCustomSigningKey - This app is required to be configured with an app-specific signing key. It is either not configured with one, or the key has expired or isn't yet valid. Please contact the owner of the application. |
+| AADSTS501461 | AcceptMappedClaims is only supported for a token audience matching the application GUID or an audience within the tenant's verified domains. Either change the resource identifier, or use an application-specific signing key. |
| AADSTS50147 | MissingCodeChallenge - The size of the code challenge parameter isn't valid. | | AADSTS501481 | The Code_Verifier doesn't match the code_challenge supplied in the authorization request.| | AADSTS501491 | InvalidCodeChallengeMethodInvalidSize - Invalid size of Code_Challenge parameter.|
The `error` field has several possible values - review the protocol documentatio
| AADSTS50194 | Application '{appId}'({appName}) isn't configured as a multi-tenant application. Usage of the /common endpoint isn't supported for such applications created after '{time}'. Use a tenant-specific endpoint or configure the application to be multi-tenant. | | AADSTS50196 | LoopDetected - A client loop has been detected. Check the app's logic to ensure that token caching is implemented, and that error conditions are handled correctly. The app has made too many of the same request in too short a period, indicating that it is in a faulty state or is abusively requesting tokens. | | AADSTS50197 | ConflictingIdentities - The user could not be found. Try signing in again. |
-| AADSTS50199 | CmsiInterrupt - For security reasons, user confirmation is required for this request. Because this is an "interaction_required" error, the client should do interactive auth. This occurs because a system webview has been used to request a token for a native application - the user must be prompted to ask if this was actually the app they meant to sign into. To avoid this prompt, the redirect URI should be part of the following safe list: <br />http://<br />https://<br />chrome-extension:// (desktop Chrome browser only) |
+| AADSTS50199 | CmsiInterrupt - For security reasons, user confirmation is required for this request. Interrupt is shown for all scheme redirects in mobile browsers. <br />No action required. The user was asked to confirm that this app is the application they intended to sign into. <br />This is a security feature that helps prevent spoofing attacks. This occurs because a system webview has been used to request a token for a native application. <br />To avoid this prompt, the redirect URI should be part of the following safe list: <br />http://<br />https://<br />chrome-extension:// (desktop Chrome browser only) |
| AADSTS51000 | RequiredFeatureNotEnabled - The feature is disabled. | | AADSTS51001 | DomainHintMustbePresent - Domain hint must be present with on-premises security identifier or on-premises UPN. | | AADSTS1000104| XCB2BResourceCloudNotAllowedOnIdentityTenant - Resource cloud {resourceCloud} isn't allowed on identity tenant {identityTenant}. {resourceCloud} - cloud instance which owns the resource. {identityTenant} - is the tenant where signing-in identity is originated from. |
active-directory Reference App Manifest https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/reference-app-manifest.md
Previously updated : 05/19/2022 Last updated : 04/13/2023
Example:
"keyCredentials": [ { "customKeyIdentifier":null,
- "endDate":"2018-09-13T00:00:00Z",
+ "endDateTime":"2018-09-13T00:00:00Z",
"keyId":"<guid>",
- "startDate":"2017-09-12T00:00:00Z",
+ "startDateTime":"2017-09-12T00:00:00Z",
"type":"AsymmetricX509Cert", "usage":"Verify", "value":null
Example:
"passwordCredentials": [ { "customKeyIdentifier": null,
- "endDate": "2018-10-19T17:59:59.6521653Z",
+ "displayName": "Generated by App Service",
+ "endDateTime": "2022-10-19T17:59:59.6521653Z",
+ "hint": "Nsn",
"keyId": "<guid>",
- "startDate":"2016-10-19T17:59:59.6521653Z",
- "value":null
+ "secretText": null,
+ "startDateTime":"2022-10-19T17:59:59.6521653Z"
} ], ```
Use the following comments section to provide feedback that helps refine and sha
[IMPLICIT-GRANT]:v1-oauth2-implicit-grant-flow.md [INTEGRATING-APPLICATIONS-AAD]: ./quickstart-register-app.md [O365-PERM-DETAILS]: /graph/permissions-reference
-[RBAC-CLOUD-APPS-AZUREAD]: http://www.dushyantgill.com/blog/2014/12/10/roles-based-access-control-in-cloud-applications-using-azure-ad/
+[RBAC-CLOUD-APPS-AZUREAD]: http://www.dushyantgill.com/blog/2014/12/10/roles-based-access-control-in-cloud-applications-using-azure-ad/
active-directory Scenario Daemon Acquire Token https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/scenario-daemon-acquire-token.md
After you've constructed a confidential client application, you can acquire a to
The scope to request for a client credential flow is the name of the resource followed by `/.default`. This notation tells Azure Active Directory (Azure AD) to use the *application-level permissions* declared statically during application registration. Also, these API permissions must be granted by a tenant administrator.
-# [.NET](#tab/dotnet)
+# [.NET](#tab/idweb)
-```csharp
-ResourceId = "someAppIDURI";
-var scopes = new [] { ResourceId+"/.default"};
+Here's an example of defining the scopes for the web API as part of the configuration in an [*appsettings.json*](https://github.com/Azure-Samples/active-directory-dotnetcore-daemon-v2/blob/master/2-Call-OwnApi/daemon-console/appsettings.json) file. This example is taken from the [.NET Core console daemon](https://github.com/Azure-Samples/active-directory-dotnetcore-daemon-v2) code sample on GitHub.
+
+```json
+{
+ "AzureAd": {
+ // Same AzureAd section as before.
+ },
+
+ "MyWebApi": {
+ "BaseUrl": "https://localhost:44372/",
+ "RelativePath": "api/TodoList",
+ "RequestAppToken": true,
+ "Scopes": [ "[Enter here the scopes for your web API]" ]
+ }
+}
``` # [Java](#tab/java)
In MSAL Python, the configuration file looks like this code snippet:
} ```
+# [.NET (low level)](#tab/dotnet)
+
+```csharp
+ResourceId = "someAppIDURI";
+var scopes = new [] { ResourceId+"/.default"};
+```
+ ### Azure AD (v1.0) resources
The scope used for client credentials should always be the resource ID followed
## AcquireTokenForClient API
-To acquire a token for the app, you'll use `AcquireTokenForClient` or its equivalent, depending on the platform.
+To acquire a token for the app, use `AcquireTokenForClient` or its equivalent, depending on the platform.
-# [.NET](#tab/dotnet)
+# [.NET](#tab/idweb)
+
+With Microsoft.Identity.Web, you don't need to acquire a token. You can use higher-level APIs, as you see in [Calling a web API from a daemon application](scenario-daemon-call-api.md). If, however, you're using an SDK that requires a token, the following code snippet shows how to get one.
```csharp
-using Microsoft.Identity.Client;
+using Microsoft.Extensions.DependencyInjection;
+using Microsoft.Identity.Abstractions;
+using Microsoft.Identity.Web;
-// With client credentials flows, the scope is always of the shape "resource/.default" because the
-// application permissions need to be set statically (in the portal or by PowerShell), and then granted by
-// a tenant administrator.
-string[] scopes = new string[] { "https://graph.microsoft.com/.default" };
+// In the Program.cs, acquire a token for your downstream API
-AuthenticationResult result = null;
-try
-{
- result = await app.AcquireTokenForClient(scopes)
- .ExecuteAsync();
-}
-catch (MsalUiRequiredException ex)
-{
- // The application doesn't have sufficient permissions.
- // - Did you declare enough app permissions during app creation?
- // - Did the tenant admin grant permissions to the application?
-}
-catch (MsalServiceException ex) when (ex.Message.Contains("AADSTS70011"))
-{
- // Invalid scope. The scope has to be in the form "https://resourceurl/.default"
- // Mitigation: Change the scope to be as expected.
-}
+var tokenAcquirerFactory = TokenAcquirerFactory.GetDefaultInstance();
+ITokenAcquirer acquirer = tokenAcquirerFactory.GetTokenAcquirer();
+AcquireTokenResult tokenResult = await acquirer.GetTokenForAppAsync("https://graph.microsoft.com/.default");
+string accessToken = tokenResult.AccessToken;
```
-### AcquireTokenForClient uses the application token cache
-
-In MSAL.NET, `AcquireTokenForClient` uses the application token cache. (All the other AcquireToken*XX* methods use the user token cache.)
-Don't call `AcquireTokenSilent` before you call `AcquireTokenForClient`, because `AcquireTokenSilent` uses the *user* token cache. `AcquireTokenForClient` checks the *application* token cache itself and updates it.
- # [Java](#tab/java) This code is extracted from the [MSAL Java dev samples](https://github.com/AzureAD/microsoft-authentication-library-for-java/tree/dev/msal4j-sdk/src/samples/confidential-client/).
private static IAuthenticationResult acquireToken() throws Exception {
# [Node.js](#tab/nodejs)
-The code snippet below illustrates token acquisition in an MSAL Node confidential client application:
+The following code snippet illustrates token acquisition in an MSAL Node confidential client application:
```JavaScript try {
else:
print(result.get("correlation_id")) # You might need this when reporting a bug. ```
+# [.NET (low level)](#tab/dotnet)
+
+```csharp
+using Microsoft.Identity.Client;
+
+// With client credentials flows, the scope is always of the shape "resource/.default" because the
+// application permissions need to be set statically (in the portal or by PowerShell), and then granted by
+// a tenant administrator.
+string[] scopes = new string[] { "https://graph.microsoft.com/.default" };
+
+AuthenticationResult result = null;
+try
+{
+ result = await app.AcquireTokenForClient(scopes)
+ .ExecuteAsync();
+}
+catch (MsalUiRequiredException ex)
+{
+ // The application doesn't have sufficient permissions.
+ // - Did you declare enough app permissions during app creation?
+ // - Did the tenant admin grant permissions to the application?
+}
+catch (MsalServiceException ex) when (ex.Message.Contains("AADSTS70011"))
+{
+ // Invalid scope. The scope has to be in the form "https://resourceurl/.default"
+ // Mitigation: Change the scope to be as expected.
+}
+```
+
+### AcquireTokenForClient uses the application token cache
+
+In MSAL.NET, `AcquireTokenForClient` uses the application token cache. (All the other AcquireToken*XX* methods use the user token cache.)
+Don't call `AcquireTokenSilent` before you call `AcquireTokenForClient`, because `AcquireTokenSilent` uses the *user* token cache. `AcquireTokenForClient` checks the *application* token cache itself and updates it.
+ ### Protocol
If your daemon app calls your own web API and you weren't able to add an app per
## Next steps
-# [.NET](#tab/dotnet)
+# [.NET](#tab/idweb)
Move on to the next article in this scenario,
-[Calling a web API](./scenario-daemon-call-api.md?tabs=dotnet).
+[Calling a web API](./scenario-daemon-call-api.md?tabs=idweb).
# [Java](#tab/java)
Move on to the next article in this scenario,
Move on to the next article in this scenario, [Calling a web API](./scenario-daemon-call-api.md?tabs=python).
+# [.NET low level](#tab/dotnet)
+
+Move on to the next article in this scenario,
+[Calling a web API](./scenario-daemon-call-api.md?tabs=dotnet).
active-directory Scenario Daemon App Configuration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/scenario-daemon-app-configuration.md
The following Microsoft libraries support daemon apps:
## Configure the authority
-Daemon applications use application permissions rather than delegated permissions. So their supported account type can't be an account in any organizational directory or any personal Microsoft account (for example, Skype, Xbox, Outlook.com). There's no tenant admin to grant consent to a daemon application for a Microsoft personal account. You'll need to choose *accounts in my organization* or *accounts in any organization*.
+Daemon applications use application permissions rather than delegated permissions. So their supported account type can't be an account in any organizational directory or any personal Microsoft account (for example, Skype, Xbox, Outlook.com). There's no tenant admin to grant consent to a daemon application for a Microsoft personal account. You need to choose *accounts in my organization* or *accounts in any organization*.
The authority specified in the application configuration should be tenanted (specifying a tenant ID or a domain name associated with your organization).
-Even if you want to provide a multitenant tool, you should use a tenant ID or domain name, and **not** `common` or `organizations` with this flow, because the service cannot reliably infer which tenant should be used.
+Even if you want to provide a multitenant tool, you should use a tenant ID or domain name, and **not** `common` or `organizations` with this flow, because the service can't reliably infer which tenant should be used.
## Configure and instantiate the application
The configuration file defines:
- The client ID that you got from the application registration. - Either a client secret or a certificate.
-# [.NET](#tab/dotnet)
+# [.NET](#tab/idweb)
-Here's an example of defining the configuration in an [*appsettings.json*](https://github.com/Azure-Samples/active-directory-dotnetcore-daemon-v2/blob/master/1-Call-MSGraph/daemon-console/appsettings.json) file. This example is taken from from the [.NET Core console daemon](https://github.com/Azure-Samples/active-directory-dotnetcore-daemon-v2) code sample on GitHub.
+Here's an example of defining the configuration in an [*appsettings.json*](https://github.com/Azure-Samples/active-directory-dotnetcore-daemon-v2/blob/master/1-Call-MSGraph/daemon-console/appsettings.json) file. This example is taken from the [.NET Core console daemon](https://github.com/Azure-Samples/active-directory-dotnetcore-daemon-v2) code sample on GitHub.
```json {
- "Instance": "https://login.microsoftonline.com/{0}",
- "Tenant": "[Enter here the tenantID or domain name for your Azure AD tenant]",
- "ClientId": "[Enter here the ClientId for your application]",
- "ClientSecret": "[Enter here a client secret for your application]",
- "CertificateName": "[Or instead of client secret: Enter here the name of a certificate (from the user cert store) as registered with your application]"
+ "AzureAd": {
+ "Instance": "https://login.microsoftonline.com/",
+ "TenantId": "[Enter here the tenantID or domain name for your Azure AD tenant]",
+ "ClientId": "[Enter here the ClientId for your application]",
+ "ClientCredentials": [
+ {
+ "SourceType": "ClientSecret",
+ "ClientSecret": "[Enter here a client secret for your application]"
+ }
+ ]
+ }
}+ ```
-You provide either a `ClientSecret` or a `CertificateName`. These settings are exclusive.
+You can provide a certificate instead of the client secret, or use [workload identity federation](/azure/active-directory/workload-identities/workload-identity-federation.md) credentials.
# [Java](#tab/java)
When you build a confidential client with certificates, the [parameters.json](ht
} ```
+# [.NET (low level) ](#tab/dotnet)
+
+Here's an example of defining the configuration in an [*appsettings.json*](https://github.com/Azure-Samples/active-directory-dotnetcore-daemon-v2/blob/master/1-Call-MSGraph/daemon-console/appsettings.json) file. This example is taken from the [.NET Core console daemon](https://github.com/Azure-Samples/active-directory-dotnetcore-daemon-v2) code sample on GitHub.
+
+```json
+{
+ "Instance": "https://login.microsoftonline.com/{0}",
+ "Tenant": "[Enter here the tenantID or domain name for your Azure AD tenant]",
+ "ClientId": "[Enter here the ClientId for your application]",
+ "ClientSecret": "[Enter here a client secret for your application]",
+ "CertificateName": "[Or instead of client secret: Enter here the name of a certificate (from the user cert store) as registered with your application]"
+}
+```
+
+You provide either a `ClientSecret` or a `CertificateName`. These settings are exclusive.
+ ### Instantiate the MSAL application
The construction is different, depending on whether you're using client secrets
Reference the MSAL package in your application code.
-# [.NET](#tab/dotnet)
+# [.NET](#tab/idweb)
-Add the [Microsoft.Identity.Client](https://www.nuget.org/packages/Microsoft.Identity.Client) NuGet package to your application, and then add a `using` directive in your code to reference it.
+Add the [Microsoft.Identity.Web.TokenAcquisition](https://www.nuget.org/packages/Microsoft.Identity.Web.TokenAcquisition) NuGet package to your application.
+Alternatively, if you want to call Microsoft Graph, add the [Microsoft.Identity.Web.MicrosoftGraph](https://www.nuget.org/packages/Microsoft.Identity.Web.MicrosoftGraph) package.
+Your project file could look like the following example. The *appsettings.json* file needs to be copied to the output directory.
-In MSAL.NET, the confidential client application is represented by the `IConfidentialClientApplication` interface.
+```xml
+<Project Sdk="Microsoft.NET.Sdk">
+
+ <PropertyGroup>
+ <OutputType>Exe</OutputType>
+ <TargetFramework>net7.0</TargetFramework>
+ <RootNamespace>daemon_console</RootNamespace>
+ </PropertyGroup>
+
+ <ItemGroup>
+ <PackageReference Include="Microsoft.Identity.Web.MicrosoftGraph" Version="2.6.1" />
+ </ItemGroup>
+
+ <ItemGroup>
+ <None Update="appsettings.json">
+ <CopyToOutputDirectory>PreserveNewest</CopyToOutputDirectory>
+ </None>
+ </ItemGroup>
+</Project>
+```
+
+In the Program.cs file, add a `using` directive in your code to reference Microsoft.Identity.Web.
```csharp
-using Microsoft.Identity.Client;
-IConfidentialClientApplication app;
+using Microsoft.Identity.Abstractions;
+using Microsoft.Identity.Web;
``` # [Java](#tab/java)
import com.microsoft.aad.msal4j.SilentParameters;
# [Node.js](#tab/nodejs)
-Simply install the packages by running `npm install` in the folder where *package.json* file resides. Then, import **msal-node** package:
+Install the packages by running `npm install` in the folder where *package.json* file resides. Then, import **msal-node** package:
```JavaScript const msal = require('@azure/msal-node');
import sys
import logging ``` -
+# [.NET (low level)](#tab/dotnet)
-#### Instantiate the confidential client application with a client secret
-
-Here's the code to instantiate the confidential client application with a client secret:
+Add the [Microsoft.Identity.Client](https://www.nuget.org/packages/Microsoft.Identity.Client) NuGet package to your application, and then add a `using` directive in your code to reference it.
-# [.NET](#tab/dotnet)
+In MSAL.NET, the confidential client application is represented by the `IConfidentialClientApplication` interface.
```csharp
-app = ConfidentialClientApplicationBuilder.Create(config.ClientId)
- .WithClientSecret(config.ClientSecret)
- .WithAuthority(new Uri(config.Authority))
- .Build();
+using Microsoft.Identity.Client;
+IConfidentialClientApplication app;
```
-The `Authority` is a concatenation of the cloud instance and the tenant ID, for example `https://login.microsoftonline.com/contoso.onmicrosoft.com` or `https://login.microsoftonline.com/eb1ed152-0000-0000-0000-32401f3f9abd`. In the *appsettings.json* file shown in the [Configuration file](#configuration-file) section, these are represented by the `Instance` and `Tenant` values, respectively.
+
-In the code sample the previous snippet was taken from, `Authority` is a property on the [AuthenticationConfig](https://github.com/Azure-Samples/active-directory-dotnetcore-daemon-v2/blob/ffc4a9f5d9bdba5303e98a1af34232b434075ac7/1-Call-MSGraph/daemon-console/AuthenticationConfig.cs#L61-L70) class, and is defined as such:
+#### Instantiate the confidential client application with a client secret
+
+Here's the code to instantiate the confidential client application with a client secret:
+
+# [.NET](#tab/idweb)
```csharp
-/// <summary>
-/// URL of the authority
-/// </summary>
-public string Authority
-{
- get
+ class Program
{
- return String.Format(CultureInfo.InvariantCulture, Instance, Tenant);
+ static async Task Main(string[] _)
+ {
+ // Get the Token acquirer factory instance. By default it reads an appsettings.json
+ // file if it exists in the same folder as the app (make sure that the
+ // "Copy to Output Directory" property of the appsettings.json file is "Copy if newer").
+ TokenAcquirerFactory tokenAcquirerFactory = TokenAcquirerFactory.GetDefaultInstance();
+
+ // Configure the application options to be read from the configuration
+ // and add the services you need (Graph, token cache)
+ IServiceCollection services = tokenAcquirerFactory.Services;
+ services.AddMicrosoftGraph();
+ // By default, you get an in-memory token cache.
+ // For more token cache serialization options, see https://aka.ms/msal-net-token-cache-serialization
+
+ // Resolve the dependency injection.
+ var serviceProvider = tokenAcquirerFactory.Build();
+
+ // ...
+ }
}
-}
```
+The configuration is read from the *appsettings.json* file.
+ # [Java](#tab/java) ```Java
app = msal.ConfidentialClientApplication(
) ```
+# [.NET (low level)](#tab/dotnet)
+
+```csharp
+app = ConfidentialClientApplicationBuilder.Create(config.ClientId)
+ .WithClientSecret(config.ClientSecret)
+ .WithAuthority(new Uri(config.Authority))
+ .Build();
+```
+
+The `Authority` is a concatenation of the cloud instance and the tenant ID, for example `https://login.microsoftonline.com/contoso.onmicrosoft.com` or `https://login.microsoftonline.com/eb1ed152-0000-0000-0000-32401f3f9abd`. In the *appsettings.json* file shown in the [Configuration file](#configuration-file) section, instance and tenant are represented by the `Instance` and `Tenant` values, respectively.
+
+In the code sample the previous snippet was taken from, `Authority` is a property on the [AuthenticationConfig](https://github.com/Azure-Samples/active-directory-dotnetcore-daemon-v2/blob/ffc4a9f5d9bdba5303e98a1af34232b434075ac7/1-Call-MSGraph/daemon-console/AuthenticationConfig.cs#L61-L70) class, and is defined as such:
+
+```csharp
+/// <summary>
+/// URL of the authority
+/// </summary>
+public string Authority
+{
+ get
+ {
+ return String.Format(CultureInfo.InvariantCulture, Instance, Tenant);
+ }
+}
+```
+ #### Instantiate the confidential client application with a client certificate Here's the code to build an application with a certificate:
-# [.NET](#tab/dotnet)
+# [.NET](#tab/idweb)
-```csharp
-X509Certificate2 certificate = ReadCertificate(config.CertificateName);
-app = ConfidentialClientApplicationBuilder.Create(config.ClientId)
- .WithCertificate(certificate)
- .WithAuthority(new Uri(config.Authority))
- .Build();
+The code itself is exactly the same. The certificate is described in the configuration.
+There are many ways to get the certificate. For details, see https://aka.ms/ms-id-web-certificates.
+Here's how you would get your certificate from Key Vault. Microsoft Identity Web delegates to Azure Identity's DefaultAzureCredential, and uses managed identity, when available, to access the certificate from Key Vault. You can debug your application locally because it then uses your developer credentials.
+
+```json
+ "ClientCredentials": [
+ {
+ "SourceType": "KeyVault",
+ "KeyVaultUrl": "https://yourKeyVaultUrl.vault.azure.net",
+ "KeyVaultCertificateName": "NameOfYourCertificate"
+    }
+  ]
``` # [Java](#tab/java)
app = msal.ConfidentialClientApplication(
) ``` --
-#### Advanced scenario: Instantiate the confidential client application with client assertions
- # [.NET](#tab/dotnet)
-Instead of a client secret or a certificate, the confidential client application can also prove its identity by using client assertions.
-
-MSAL.NET has two methods to provide signed assertions to the confidential client app:
--- `.WithClientAssertion()`-- `.WithClientClaims()`-
-When you use `WithClientAssertion`, provide a signed JWT. This advanced scenario is detailed in [Client assertions](msal-net-client-assertions.md).
- ```csharp
-string signedClientAssertion = ComputeAssertion();
+X509Certificate2 certificate = ReadCertificate(config.CertificateName);
app = ConfidentialClientApplicationBuilder.Create(config.ClientId)
- .WithClientAssertion(signedClientAssertion)
- .Build();
+ .WithCertificate(certificate)
+ .WithAuthority(new Uri(config.Authority))
+ .Build();
```
-When you use `WithClientClaims`, MSAL.NET will produce a signed assertion that contains the claims expected by Azure AD, plus additional client claims that you want to send.
-This code shows how to do that:
+
-```csharp
-string ipAddress = "192.168.1.2";
-var claims = new Dictionary<string, string> { { "client_ip", ipAddress } };
-X509Certificate2 certificate = ReadCertificate(config.CertificateName);
-app = ConfidentialClientApplicationBuilder.Create(config.ClientId)
- .WithAuthority(new Uri(config.Authority))
- .WithClientClaims(certificate, claims)
- .Build();
-```
+#### Advanced scenario: Instantiate the confidential client application with client assertions
-Again, for details, see [Client assertions](msal-net-client-assertions.md).
+# [.NET](#tab/idweb)
+
+Instead of a client secret or a certificate, the confidential client application can also prove its identity by using client assertions. See
+[CredentialDescription](/dotnet/api/microsoft.identity.abstractions.credentialdescription?view=msal-model-dotnet-latest) for details.
# [Java](#tab/java)
app = msal.ConfidentialClientApplication(
For details, see the MSAL Python reference documentation for [ConfidentialClientApplication](https://msal-python.readthedocs.io/en/latest/#msal.ClientApplication.__init__).
+# [.NET (low level)](#tab/dotnet)
+
+Instead of a client secret or a certificate, the confidential client application can also prove its identity by using client assertions.
+
+MSAL.NET has two methods to provide signed assertions to the confidential client app:
+
+- `.WithClientAssertion()`
+- `.WithClientClaims()`
+
+When you use `WithClientAssertion`, provide a signed JWT. This advanced scenario is detailed in [Client assertions](msal-net-client-assertions.md).
+
+```csharp
+string signedClientAssertion = ComputeAssertion();
+app = ConfidentialClientApplicationBuilder.Create(config.ClientId)
+ .WithClientAssertion(signedClientAssertion)
+ .Build();
+```
+
+When you use `WithClientClaims`, MSAL.NET produces a signed assertion that contains the claims expected by Azure AD, plus additional client claims that you want to send.
+This code shows how to do that:
+
+```csharp
+string ipAddress = "192.168.1.2";
+var claims = new Dictionary<string, string> { { "client_ip", ipAddress } };
+X509Certificate2 certificate = ReadCertificate(config.CertificateName);
+app = ConfidentialClientApplicationBuilder.Create(config.ClientId)
+ .WithAuthority(new Uri(config.Authority))
+ .WithClientClaims(certificate, claims)
+ .Build();
+```
+
+Again, for details, see [Client assertions](msal-net-client-assertions.md).
+ ## Next steps
-# [.NET](#tab/dotnet)
+# [.NET](#tab/idweb)
Move on to the next article in this scenario,
-[Acquire a token for the app](./scenario-daemon-acquire-token.md?tabs=dotnet).
+[Acquire a token for the app](./scenario-daemon-acquire-token.md?tabs=idweb).
# [Java](#tab/java)
Move on to the next article in this scenario,
Move on to the next article in this scenario, [Acquire a token for the app](./scenario-daemon-acquire-token.md?tabs=python).
+# [.NET (low level)](#tab/dotnet)
+
+Move on to the next article in this scenario,
+[Acquire a token for the app](./scenario-daemon-acquire-token.md?tabs=dotnet).
+
active-directory Scenario Daemon Call Api https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/scenario-daemon-call-api.md
# Daemon app that calls web APIs - call a web API from the app
-.NET daemon apps can call a web API. .NET daemon apps can also call several pre-approved web APIs.
+.NET daemon apps can call a web API. .NET daemon apps can also call several preapproved web APIs.
## Calling a web API from a daemon application Here's how to use the token to call an API:
-# [.NET](#tab/dotnet)
+# [.NET](#tab/idweb)
+Microsoft.Identity.Web abstracts away the complexity of MSAL.NET. It provides you with higher-level APIs that handle the internals of MSAL.NET for you, such as processing Conditional Access errors and token caching.
+
+Here's the Program.cs of the daemon app calling a downstream API:
+
+```csharp
+using Microsoft.Extensions.DependencyInjection;
+using Microsoft.Identity.Abstractions;
+using Microsoft.Identity.Web;
+
+// In the Program.cs, acquire a token for your downstream API
+
+var tokenAcquirerFactory = TokenAcquirerFactory.GetDefaultInstance();
+tokenAcquirerFactory.Services.AddDownstreamApi("MyApi",
+ tokenAcquirerFactory.Configuration.GetSection("MyWebApi"));
+var sp = tokenAcquirerFactory.Build();
+
+var api = sp.GetRequiredService<IDownstreamApi>();
+var result = await api.GetForAppAsync<IEnumerable<TodoItem>>("MyApi");
+Console.WriteLine($"result = {result?.Count()}");
+```
+
+Here's the Program.cs of a daemon app that calls Microsoft Graph:
+
+```csharp
+var tokenAcquirerFactory = TokenAcquirerFactory.GetDefaultInstance();
+tokenAcquirerFactory.Services.AddMicrosoftGraph();
+var serviceProvider = tokenAcquirerFactory.Build();
+try
+{
+ GraphServiceClient graphServiceClient = serviceProvider.GetRequiredService<GraphServiceClient>();
+ var users = await graphServiceClient.Users
+ .Request()
+ .WithAppOnly()
+ .GetAsync();
+ Console.WriteLine($"{users.Count} users");
+ Console.ReadKey();
+}
+catch (Exception ex) { Console.WriteLine("We could not retrieve the user's list: " + $"{ex}"); }
+```
# [Java](#tab/java)
http_headers = {'Authorization': 'Bearer ' + result['access_token'],
data = requests.get(endpoint, headers=http_headers, stream=False).json() ```
+# [.NET low level](#tab/dotnet)
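Here's a minimal sketch of calling a protected web API with the token returned by `AcquireTokenForClient`, assuming the `AuthenticationResult` named `result` from earlier in this scenario and a placeholder API URL:

```csharp
// Sketch for the low-level MSAL.NET path: call a protected web API with the
// access token returned by AcquireTokenForClient.
// Assumes: using System; using System.Net.Http; using System.Net.Http.Headers;
// and an AuthenticationResult named 'result' obtained earlier in this scenario.
var httpClient = new HttpClient();
httpClient.DefaultRequestHeaders.Authorization =
    new AuthenticationHeaderValue("Bearer", result.AccessToken);

// Placeholder URL: point this at the web API your daemon is permitted to call.
string json = await httpClient.GetStringAsync("https://localhost:44372/api/TodoList");
Console.WriteLine(json);
```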
++ ## Calling several APIs
-For daemon apps, the web APIs that you call need to be pre-approved. There's no incremental consent with daemon apps. (There's no user interaction.) The tenant admin needs to provide consent in advance for the application and all the API permissions. If you want to call several APIs, acquire a token for each resource, each time calling `AcquireTokenForClient`. MSAL will use the application token cache to avoid unnecessary service calls.
+For daemon apps, the web APIs that you call need to be preapproved. There's no incremental consent with daemon apps. (There's no user interaction.) The tenant admin needs to provide consent in advance for the application and all the API permissions. If you want to call several APIs, acquire a token for each resource, each time calling `AcquireTokenForClient`. MSAL uses the application token cache to avoid unnecessary service calls.
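Here's a minimal sketch of that per-resource pattern with MSAL.NET, assuming the `IConfidentialClientApplication` named `app` built earlier in this scenario and a placeholder App ID URI:

```csharp
// Sketch: acquire one app token per resource. MSAL.NET serves repeated requests
// from the application token cache, so calling AcquireTokenForClient per resource is cheap.
// Assumes: using Microsoft.Identity.Client; and an IConfidentialClientApplication named 'app'.
string[] resources =
{
    "https://graph.microsoft.com",
    "api://11111111-2222-3333-4444-555555555555" // placeholder App ID URI for your own web API
};

foreach (string resource in resources)
{
    AuthenticationResult result = await app.AcquireTokenForClient(new[] { $"{resource}/.default" })
        .ExecuteAsync();
    Console.WriteLine($"Acquired a token for {resource}; expires {result.ExpiresOn}");
}
```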
## Next steps
-# [.NET](#tab/dotnet)
+# [.NET](#tab/idweb)
Move on to the next article in this scenario,
-[Move to production](./scenario-daemon-production.md?tabs=dotnet).
+[Move to production](./scenario-daemon-production.md?tabs=idweb).
# [Java](#tab/java)
Move on to the next article in this scenario,
Move on to the next article in this scenario, [Move to production](./scenario-daemon-production.md?tabs=python). -
+# [.NET low level](#tab/dotnet)
+
+Move on to the next article in this scenario,
+[Move to production](./scenario-daemon-production.md?tabs=dotnet).
++
active-directory Web App Quickstart Portal Node Js Ciam https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/web-app-quickstart-portal-node-js-ciam.md
Last updated 04/12/2023
# Portal quickstart for React SPA
-> [!div renderon="portal" class="sxs-lookup"]
> In this quickstart, you download and run a code sample that demonstrates how a React single-page application (SPA) can sign in users with Azure AD CIAM.
->
+
+> [!div renderon="portal" id="display-on-portal" class="sxs-lookup"]
> ## Prerequisites > > * Azure subscription - [Create an Azure subscription for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F) > * [Node.js](https://nodejs.org/en/download/) > * [Visual Studio Code](https://code.visualstudio.com/download) or another code editor >
-> ## Download the code
->
-> > [!div class="nextstepaction"]
-> > [Download the code sample](https://github.com/Azure-Samples/ms-identity-ciam-javascript-tutorial/archive/react-quickstart.zip)
->
> ## Run the sample > > 1. Unzip the downloaded file. >
-> 1. Locate the folder that contains the `package.json` file in your terminal, then run the following command:
+> 1. In your terminal, locate the folder that contains the `package.json` file, then run the following command:
> > ```console > npm install && npm start
Last updated 04/12/2023
> > 1. Open your browser and visit `http://localhost:3000`. >
-> 1. Select the **Sign-in** link on the navigation bar.
+> 1. Select the **Sign-in** link on the navigation bar, then follow the prompts.
>
active-directory Code Samples https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/external-identities/code-samples.md
Previously updated : 03/14/2022 Last updated : 04/06/2023 -++
+# Customer intent: As a tenant administrator, I want to bulk-invite external users to an organization from email addresses that I've stored in a .csv file.
# Azure Active Directory B2B collaboration code and PowerShell samples

## PowerShell example
-You can bulk-invite external users to an organization from email addresses that you've stored in a .CSV file.
+You can bulk-invite external users to an organization from email addresses that you've stored in a .csv file.
-1. Prepare the .CSV file
- Create a new CSV file and name it invitations.csv. In this example, the file is saved in C:\data, and contains the following information:
+1. Prepare the .csv file
+ Create a new .csv file and name it invitations.csv. In this example, the file is saved in C:\data, and contains the following information:
Name | InvitedUserEmailAddress
---- | ----
This cmdlet sends an invitation to the email addresses in invitations.csv. More
## Code sample
-The code sample illustrates how to call the invitation API and get the redemption URL. Use the redemption URL to send a custom invitation email. The email can be composed with an HTTP client, so you can customize how it looks and send it through the Microsoft Graph API.
+The code sample illustrates how to call the invitation API and get the redemption URL. Use the redemption URL to send a custom invitation email. You can compose the email with an HTTP client, so you can customize how it looks and send it through the Microsoft Graph API.
# [HTTP](#tab/http)
const inviteRedeemUrl = await sendInvite();
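
For illustration only (this is not the article's sample, which uses the HTTP and JavaScript tabs), a minimal Python sketch of the same call might look like the following; the access token, email address, and redirect URL are placeholder values:

```python
import requests

# Placeholder: an app-only Microsoft Graph access token with the User.Invite.All permission.
graph_token = "your-access-token"

invitation = {
    "invitedUserEmailAddress": "guest@example.com",
    "inviteRedirectUrl": "https://myapps.microsoft.com",
    "sendInvitationMessage": False,  # suppress the default email; send a custom one instead
}

response = requests.post(
    "https://graph.microsoft.com/v1.0/invitations",
    headers={"Authorization": "Bearer " + graph_token},
    json=invitation,
)
response.raise_for_status()

# Use the redemption URL in the custom invitation email you compose yourself.
invite_redeem_url = response.json()["inviteRedeemUrl"]
print(invite_redeem_url)
```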
## Next steps

-- [What is Azure AD B2B collaboration?](what-is-b2b.md)
+- [Samples for guest user self-service sign-up](code-samples-self-service-sign-up.md)
active-directory 3 Secure Access Plan https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/fundamentals/3-secure-access-plan.md
Generally, organizations customize policy, however consider the following parame
## Access control methods
-Some features, for example entitlement management, are available with an Azure AD Premium 2 (P2) license. Microsoft 365 E5 and Office 365 E5 licenses include Azure AD P2 licenses. Learn more in the following entitlement management section.
+Some features, for example entitlement management, are available with an Azure AD Premium 2 (P2) license. Microsoft 365 E5 and Office 365 E5 licenses include Azure AD Premium P2 licenses. Learn more in the following entitlement management section.
> [!NOTE]
-> Licenses are for one user. Therefore users, administrators, and business owners can have delegated access control. This scenario can occur with Azure AD P2 or Microsoft 365 E5, and you don't have to enable licenses for all users. The first 50,000 external users are free. If you don't enable P2 licenses for other internal users, they can't use entitlement management.
+> Licenses are for one user. Therefore users, administrators, and business owners can have delegated access control. This scenario can occur with Azure AD Premium P2 or Microsoft 365 E5, and you don't have to enable licenses for all users. The first 50,000 external users are free. If you don't enable P2 licenses for other internal users, they can't use entitlement management.
Other combinations of Microsoft 365, Office 365, and Azure AD have functionality to manage external users. See, [Microsoft 365 guidance for security & compliance](/office365/servicedescriptions/microsoft-365-service-descriptions/microsoft-365-tenantlevel-services-licensing-guidance/microsoft-365-security-compliance-licensing-guidance).
-## Govern access with Azure AD P2 and Microsoft 365 or Office 365 E5
+## Govern access with Azure AD Premium P2 and Microsoft 365 or Office 365 E5
-Azure AD P2 and Microsoft 365 E5 have all the security and governance tools.
+Azure AD Premium P2, included in Microsoft 365 E5, has additional security and governance capabilities.
### Provision, sign-in, review access, and deprovision access
Use entitlement management to provision and deprovision access to groups and tea
Learn more: [Create a new access package in entitlement management](../governance/entitlement-management-access-package-create.md)
-## Governance with Azure AD P1, Microsoft 365, Office 365 E3
+## Manage access with Azure AD P1, Microsoft 365, Office 365 E3
### Provision, sign-in, review access, and deprovision access
active-directory 9 Secure Access Teams Sharepoint https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/fundamentals/9-secure-access-teams-sharepoint.md
Guest invite settings determine who invites guests and how guests are invited. T
* The IT team:
   * After training is complete, the IT team grants the Guest Inviter role
- * To enable access reviews, assigns Azure AD P2 license to the Microsoft 365 group owner
+ * Ensures there are sufficient Azure AD Premium P2 licenses for the Microsoft 365 group owners who will review
   * Creates a Microsoft 365 group access review
   * Confirms access reviews occur
   * Removes users added to SharePoint
active-directory Active Directory Deployment Plans https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/fundamentals/active-directory-deployment-plans.md
The following list describes features and services for productivity gains in hyb
* See, [B2B collaboration overview](../external-identities/what-is-b2b.md)
* See, [Plan an Azure Active Directory B2B collaboration deployment](../fundamentals/secure-external-access-resources.md)
-## Governance and reporting
+## Identity Governance and reporting
-Use the following list to learn about governance and reporting. Items in the list refer to Microsoft Entra.
+Use the following list to learn about identity governance and reporting. Items in the list refer to Microsoft Entra.
Learn more: [Secure access for a connected world—meet Microsoft Entra](https://www.microsoft.com/en-us/security/blog/?p=114039)
Learn more: [Secure access for a connected world—meet Microsoft Entra](https:/
  * See, [Plan a Microsoft Entra access reviews deployment](../governance/deploy-access-reviews.md)
* **Identity governance** - Meet your compliance and risk management objectives for access to critical applications. Learn how to enforce accurate access.
  * See, [Govern access for applications in your environment](../governance/identity-governance-applications-prepare.md)
-
-Learn more: [Azure governance documentation](../../governance/index.yml)
## Best practices for a pilot
active-directory Active Directory Ops Guide Auth https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/fundamentals/active-directory-ops-guide-auth.md
As you review your list, you may find you need to either assign an owner for tas
#### Owner recommended reading

- [Assigning administrator roles in Azure Active Directory](../roles/permissions-reference.md)
-- [Governance in Azure](../../governance/index.yml)

## Credentials management
Conditional Access is an essential tool for improving the security posture of yo
- Plan for [break glass](../roles/security-planning.md#break-glass-what-to-do-in-an-emergency) accounts without MFA controls
- Ensure a consistent experience across Microsoft 365 client applications (for example, Teams, OneDrive, Outlook, etc.) by implementing the same set of controls for services such as Exchange Online and SharePoint Online
- Assignment to policies should be implemented through groups, not individuals
-- Do regular reviews of the exception groups used in policies to limit the time users are out of the security posture. If you own Azure AD P2, then you can use access reviews to automate the process
+- Do regular reviews of the exception groups used in policies to limit the time users are out of the security posture. If you own Azure AD Premium P2, then you can use access reviews to automate the process
#### Conditional Access recommended reading
active-directory Active Directory Ops Guide Govern https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/fundamentals/active-directory-ops-guide-govern.md
As you review your list, you may find you need to either assign an owner for tas
#### Owner recommended reading

- [Assigning administrator roles in Azure Active Directory](../roles/permissions-reference.md)
-- [Governance in Azure](../../governance/index.yml)

### Configuration changes testing
active-directory Active Directory Ops Guide Iam https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/fundamentals/active-directory-ops-guide-iam.md
As you review your list, you may find you need to either assign an owner for tas
#### Assigning owners recommended reading

- [Assigning administrator roles in Azure Active Directory](../roles/permissions-reference.md)
-- [Governance in Azure](../../governance/index.yml)

## On-premises identity synchronization
active-directory Active Directory Ops Guide Intro https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/fundamentals/active-directory-ops-guide-intro.md
This operations reference guide describes the checks and actions you should take
Some recommendations here might not be applicable to all customers' environments; for example, AD FS best practices might not apply if your organization uses password hash sync.

> [!NOTE]
-> These recommendations are current as of the date of publishing but can change over time. Organizations should continuously evaluate their identity practices as Microsoft products and services evolve over time. Recommendations can change when organizations subscribe to a different Azure AD Premium license. For example, Azure AD Premium P2 will include more governance recommendations.
+> These recommendations are current as of the date of publishing but can change over time. Organizations should continuously evaluate their identity practices as Microsoft products and services evolve over time. Recommendations can change when organizations subscribe to a different Azure AD Premium license.
## Stakeholders
active-directory Active Directory Ops Guide Ops https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/fundamentals/active-directory-ops-guide-ops.md
As you review your list, you may find you need to either assign an owner for tas
#### Owners recommended reading

- [Assigning administrator roles in Azure Active Directory](../roles/permissions-reference.md)
-- [Governance in Azure](../../governance/index.yml)

## Hybrid management
active-directory Concept Secure Remote Workers https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/fundamentals/concept-secure-remote-workers.md
The following table is intended to highlight the key actions for the following l
The following table is intended to highlight the key actions for the following license subscriptions:

-- Azure Active Directory Premium P2 (Azure AD P2)
+- Azure Active Directory Premium P2
- Enterprise Mobility + Security (EMS E5)
- Microsoft 365 (E5, A5)
active-directory Whats Deprecated Azure Ad https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/fundamentals/whats-deprecated-azure-ad.md
Use the following table to learn about changes including deprecations, retiremen
|Microsoft Authenticator app [Number matching](../authentication/how-to-mfa-number-match.md)|Feature change|May 8, 2023|
|[My Groups experience](https://techcommunity.microsoft.com/t5/microsoft-entra-azure-ad-blog/microsoft-entra-change-announcements-march-2023-train/ba-p/2967448)|Feature change|May 2023|
|[My Apps browser extension](https://techcommunity.microsoft.com/t5/microsoft-entra-azure-ad-blog/microsoft-entra-change-announcements-march-2023-train/ba-p/2967448)|Feature change|May 2023|
-|[System-preferred authentication methods](https://techcommunity.microsoft.com/t5/microsoft-entra-azure-ad-blog/microsoft-entra-change-announcements-march-2023-train/ba-p/2967448)|Feature change|On GA|
+|[System-preferred authentication methods](https://techcommunity.microsoft.com/t5/microsoft-entra-azure-ad-blog/microsoft-entra-change-announcements-march-2023-train/ba-p/2967448)|Feature change|Sometime after GA|
|[Azure AD Authentication Library (ADAL)](https://techcommunity.microsoft.com/t5/microsoft-entra-azure-ad-blog/microsoft-entra-change-announcements-march-2023-train/ba-p/2967448)|Retirement|Jun 30, 2023|
|[Azure AD Graph API](https://techcommunity.microsoft.com/t5/microsoft-entra-azure-ad-blog/microsoft-entra-change-announcements-march-2023-train/ba-p/2967448)|Deprecation|Jun 30, 2023|
|[Azure AD PowerShell and MSOnline PowerShell](https://techcommunity.microsoft.com/t5/microsoft-entra-azure-ad-blog/microsoft-entra-change-announcements-march-2023-train/ba-p/2967448)|Deprecation|Jun 30, 2023|
active-directory Whats New Archive https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/fundamentals/whats-new-archive.md
Privileged Identity Management (PIM) administrators can now export all active an
For more information, see [View activity and audit history for Azure resource roles in PIM](../privileged-identity-management/azure-pim-resource-rbac.md). -
-## November/December 2018
-
-### Users removed from synchronization scope no longer switch to cloud-only accounts
-
-**Type:** Fixed
-**Service category:** User Management
-**Product capability:** Directory
-
->[!Important]
->We've heard and understand your frustration because of this fix. Therefore, we've reverted this change until such time that we can make the fix easier for you to implement in your organization.
-
-We've fixed a bug in which the DirSyncEnabled flag of a user would be erroneously switched to **False** when the Active Directory Domain Services (AD DS) object was excluded from synchronization scope and then moved to the Recycle Bin in Azure AD on the following sync cycle. As a result of this fix, if the user is excluded from sync scope and afterwards restored from Azure AD Recycle Bin, the user account remains as synchronized from on-premises AD, as expected, and cannot be managed in the cloud since its source of authority (SoA) remains as on-premises AD.
-
-Prior to this fix, there was an issue when the DirSyncEnabled flag was switched to False. It gave the wrong impression that these accounts were converted to cloud-only objects and that the accounts could be managed in the cloud. However, the accounts still retained their SoA as on-premises and all synchronized properties (shadow attributes) coming from on-premises AD. This condition caused multiple issues in Azure AD and other cloud workloads (like Exchange Online) that expected to treat these accounts as synchronized from AD but were now behaving like cloud-only accounts.
-
-At this time, the only way to truly convert a synchronized-from-AD account to cloud-only account is by disabling DirSync at the tenant level, which triggers a backend operation to transfer the SoA. This type of SoA change requires (but is not limited to) cleaning all the on-premises related attributes (such as LastDirSyncTime and shadow attributes) and sending a signal to other cloud workloads to have its respective object converted to a cloud-only account too.
-
-This fix consequently prevents direct updates on the ImmutableID attribute of a user synchronized from AD, which in some scenarios in the past were required. By design, the ImmutableID of an object in Azure AD, as the name implies, is meant to be immutable. New features implemented in Azure AD Connect Health and Azure AD Connect Synchronization client are available to address such scenarios:
--- **Large-scale ImmutableID update for many users in a staged approach**-
- For example, you need to do a lengthy AD DS inter-forest migration. Solution: Use Azure AD Connect to **Configure Source Anchor** and, as the user migrates, copy the existing ImmutableID values from Azure AD into the local AD DS user's ms-DS-Consistency-Guid attribute of the new forest. For more information, see [Using ms-DS-ConsistencyGuid as sourceAnchor](../hybrid/plan-connect-design-concepts.md#using-ms-ds-consistencyguid-as-sourceanchor).
--- **Large-scale ImmutableID updates for many users in one shot**-
- For example, while implementing Azure AD Connect you make a mistake, and now you need to change the SourceAnchor attribute. Solution: Disable DirSync at the tenant level and clear all the invalid ImmutableID values. For more information, see [Turn off directory synchronization for Office 365](/office365/enterprise/turn-off-directory-synchronization).
--- **Rematch on-premises user with an existing user in Azure AD**
- For example, a user that has been re-created in AD DS generates a duplicate in Azure AD account instead of rematching it with an existing Azure AD account (orphaned object). Solution: Use Azure AD Connect Health in the Azure portal to remap the Source Anchor/ImmutableID. For more information, see [Orphaned object scenario](../hybrid/how-to-connect-health-diagnose-sync-errors.md#orphaned-object-scenario).
-
-### Breaking Change: Updates to the audit and sign-in logs schema through Azure Monitor
-
-**Type:** Changed feature
-**Service category:** Reporting
-**Product capability:** Monitoring & Reporting
-
-We're currently publishing both the Audit and Sign-in log streams through Azure Monitor, so you can seamlessly integrate the log files with your SIEM tools or with Log Analytics. Based on your feedback, and in preparation for this feature's general availability announcement, we're making the following changes to our schema. These schema changes and its related documentation updates will happen by the first week of January.
-
-#### New fields in the Audit schema
-We're adding a new **Operation Type** field, to provide the type of operation performed on the resource. For example, **Add**, **Update**, or **Delete**.
-
-#### Changed fields in the Audit schema
-The following fields are changing in the Audit schema:
-
-|Field name|What changed|Old values|New Values|
-|-||-|-|
-|Category|This was the **Service Name** field. It's now the **Audit Categories** field. **Service Name** has been renamed to the **loggedByService** field.|<ul><li>Account Provisioning</li><li>Core Directory</li><li>Self-service Password Reset</li></ul>|<ul><li>User Management</li><li>Group Management</li><li>App Management</li></ul>|
-|targetResources|Includes **TargetResourceType** at the top level.|&nbsp;|<ul><li>Policy</li><li>App</li><li>User</li><li>Group</li></ul>|
-|loggedByService|Provides the name of the service that generated the audit log.|Null|<ul><li>Account Provisioning</li><li>Core Directory</li><li>Self-service password reset</li></ul>|
-|Result|Provides the result of the audit logs. Previously, this was enumerated, but we now show the actual value.|<ul><li>0</li><li>1</li></ul>|<ul><li>Success</li><li>Failure</li></ul>|
-
-#### Changed fields in the Sign-in schema
-The following fields are changing in the Sign-in schema:
-
-|Field name|What changed|Old values|New Values|
-|-||-|-|
-|appliedConditionalAccessPolicies|This was the **conditionalaccessPolicies** field. It's now the **appliedConditionalAccessPolicies** field.|No change|No change|
-|conditionalAccessStatus|Provides the result of the Conditional Access Policy Status at sign-in. Previously, this was enumerated, but we now show the actual value.|<ul><li>0</li><li>1</li><li>2</li><li>3</li></ul>|<ul><li>Success</li><li>Failure</li><li>Not Applied</li><li>Disabled</li></ul>|
-|appliedConditionalAccessPolicies: result|Provides the result of the individual Conditional Access Policy Status at sign-in. Previously, this was enumerated, but we now show the actual value.|<ul><li>0</li><li>1</li><li>2</li><li>3</li></ul>|<ul><li>Success</li><li>Failure</li><li>Not Applied</li><li>Disabled</li></ul>|
-
-For more information about the schema, see [Interpret the Azure AD audit logs schema in Azure Monitor (preview)](../reports-monitoring/overview-reports.md)
---
-### Identity Protection improvements to the supervised machine learning model and the risk score engine
-
-**Type:** Changed feature
-**Service category:** Identity Protection
-**Product capability:** Risk Scores
-
-Improvements to the Identity Protection-related user and sign-in risk assessment engine can help to improve user risk accuracy and coverage. Administrators may notice that user risk level is no longer directly linked to the risk level of specific detections, and that there's an increase in the number and level of risky sign-in events.
-
-Risk detections are now evaluated by the supervised machine learning model, which calculates user risk by using additional features of the user's sign-ins and a pattern of detections. Based on this model, the administrator might find users with high risk scores, even if detections associated with that user are of low or medium risk.
---
-### Administrators can reset their own password using the Microsoft Authenticator app (Public preview)
-
-**Type:** Changed feature
-**Service category:** Self Service Password Reset
-**Product capability:** User Authentication
-
-Azure AD administrators can now reset their own password using the Microsoft Authenticator app notifications or a code from any mobile authenticator app or hardware token. To reset their own password, administrators will now be able to use two of the following methods:
--- Microsoft Authenticator app notification--- Other mobile authenticator app / Hardware token code--- Email--- Phone call--- Text message-
-For more information about using the Microsoft Authenticator app to reset passwords, see [Azure AD self-service password reset - Mobile app and SSPR (Preview)](../authentication/concept-sspr-howitworks.md#mobile-app-and-sspr)
---
-### New Azure AD Cloud Device Administrator role (Public preview)
-
-**Type:** New feature
-**Service category:** Device Registration and Management
-**Product capability:** Access control
-
-Administrators can assign users to the new Cloud Device Administrator role to perform cloud device administrator tasks. Users assigned the Cloud Device Administrators role can enable, disable, and delete devices in Azure AD, along with being able to read Windows 10 BitLocker keys (if present) in the Azure portal.
-
-For more information about roles and permissions, see [Assigning administrator roles in Azure Active Directory](../roles/permissions-reference.md)
---
-### Manage your devices using the new activity timestamp in Azure AD (Public preview)
-
-**Type:** New feature
-**Service category:** Device Registration and Management
-**Product capability:** Device Lifecycle Management
-
-We realize that over time you must refresh and retire your organizations' devices in Azure AD, to avoid having stale devices in your environment. To help with this process, Azure AD now updates your devices with a new activity timestamp, helping you to manage your device lifecycle.
-
-For more information about how to get and use this timestamp, see [How To: Manage the stale devices in Azure AD](../devices/manage-stale-devices.md)
---
-### Administrators can require users to accept a terms of use on each device
-
-**Type:** New feature
-**Service category:** Terms of use
-**Product capability:** Governance
-
-Administrators can now turn on the **Require users to consent on every device** option to require your users to accept your terms of use on every device they're using on your tenant.
-
-For more information, see the [Per-device terms of use section of the Azure Active Directory terms of use feature](../conditional-access/terms-of-use.md#per-device-terms-of-use).
---
-### Administrators can configure a terms of use to expire based on a recurring schedule
-
-**Type:** New feature
-**Service category:** Terms of use
-**Product capability:** Governance
--
-Administrators can now turn on the **Expire consents** option to make a terms of use expire for all of your users based on your specified recurring schedule. The schedule can be annually, bi-annually, quarterly, or monthly. After the terms of use expire, users must reaccept.
-
-For more information, see the [Add terms of use section of the Azure Active Directory terms of use feature](../conditional-access/terms-of-use.md#add-terms-of-use).
---
-### Administrators can configure a terms of use to expire based on each user's schedule
-
-**Type:** New feature
-**Service category:** Terms of use
-**Product capability:** Governance
-
-Administrators can now specify a duration that user must reaccept a terms of use. For example, administrators can specify that users must reaccept a terms of use every 90 days.
-
-For more information, see the [Add terms of use section of the Azure Active Directory terms of use feature](../conditional-access/terms-of-use.md#add-terms-of-use).
---
-### New Azure AD Privileged Identity Management (PIM) emails for Azure Active Directory roles
-
-**Type:** New feature
-**Service category:** Privileged Identity Management
-**Product capability:** Privileged Identity Management
-
-Customers using Azure AD Privileged Identity Management (PIM) can now receive a weekly digest email, including the following information for the last seven days:
--- Overview of the top eligible and permanent role assignments--- Number of users activating roles--- Number of users assigned to roles in PIM--- Number of users assigned to roles outside of PIM--- Number of users "made permanent" in PIM-
-For more information about PIM and the available email notifications, see [Email notifications in PIM](../privileged-identity-management/pim-email-notifications.md).
---
-### Group-based licensing is now generally available
-
-**Type:** Changed feature
-**Service category:** Other
-**Product capability:** Directory
-
-Group-based licensing is out of public preview and is now generally available. As part of this general release, we've made this feature more scalable and have added the ability to reprocess group-based licensing assignments for a single user and the ability to use group-based licensing with Office 365 E3/A3 licenses.
-
-For more information about group-based licensing, see [What is group-based licensing in Azure Active Directory?](./active-directory-licensing-whatis-azure-portal.md)
---
-### New Federated Apps available in Azure AD app gallery - November 2018
-
-**Type:** New feature
-**Service category:** Enterprise Apps
-**Product capability:** 3rd Party Integration
-
-In November 2018, we've added these 26 new apps with Federation support to the app gallery:
-
-[CoreStack](https://cloud.corestack.io/site/login), [HubSpot](../saas-apps/hubspot-tutorial.md), [GetThere](../saas-apps/getthere-tutorial.md), [Gra-Pe](../saas-apps/grape-tutorial.md), [eHour](https://getehour.com/try-now), [Consent2Go](../saas-apps/consent2go-tutorial.md), [Appinux](../saas-apps/appinux-tutorial.md), [DriveDollar](https://azuremarketplace.microsoft.com/marketplace/apps/savitas.drivedollar-azuread?tab=Overview), [Useall](../saas-apps/useall-tutorial.md), [Infinite Campus](../saas-apps/infinitecampus-tutorial.md), [Alaya](https://alayagood.com), [HeyBuddy](../saas-apps/heybuddy-tutorial.md), [Wrike SAML](../saas-apps/wrike-tutorial.md), [Drift](../saas-apps/drift-tutorial.md), [Zenegy for Business Central 365](https://accounting.zenegy.com/), [Everbridge Member Portal](../saas-apps/everbridge-tutorial.md), [Ivanti Service Manager (ISM)](../saas-apps/ivanti-service-manager-tutorial.md), [Peakon](../saas-apps/peakon-tutorial.md), [Allbound SSO](../saas-apps/allbound-sso-tutorial.md), [Plex Apps - Classic Test](https://test.plexonline.com/signon), [Plex Apps ΓÇô Classic](https://www.plexonline.com/signon), [Plex Apps - UX Test](https://test.cloud.plex.com/sso), [Plex Apps ΓÇô UX](https://cloud.plex.com/sso), [Plex Apps ΓÇô IAM](https://accounts.plex.com/)
-
-For more information about the apps, see [SaaS application integration with Azure Active Directory](../saas-apps/tutorial-list.md). For more information about listing your application in the Azure AD app gallery, see [List your application in the Azure Active Directory application gallery](../manage-apps/v2-howto-app-gallery-listing.md).
---
-## October 2018
-
-### Azure AD Logs now work with Azure Log Analytics (Public preview)
-
-**Type:** New feature
-**Service category:** Reporting
-**Product capability:** Monitoring & Reporting
-
-We're excited to announce that you can now forward your Azure AD logs to Azure Log Analytics! This top-requested feature helps give you even better access to analytics for your business, operations, and security, as well as a way to help monitor your infrastructure. For more information, see the [Azure Active Directory Activity logs in Azure Log Analytics now available](https://techcommunity.microsoft.com/t5/Azure-Active-Directory-Identity/Azure-Active-Directory-Activity-logs-in-Azure-Log-Analytics-now/ba-p/274843) blog.
---
-### New Federated Apps available in Azure AD app gallery - October 2018
-
-**Type:** New feature
-**Service category:** Enterprise Apps
-**Product capability:** 3rd Party Integration
-
-In October 2018, we've added these 14 new apps with Federation support to the app gallery:
-
-[My Award Points](../saas-apps/myawardpoints-tutorial.md), [Vibe HCM](../saas-apps/vibehcm-tutorial.md), ambyint, [MyWorkDrive](../saas-apps/myworkdrive-tutorial.md), [BorrowBox](../saas-apps/borrowbox-tutorial.md), Dialpad, [ON24 Virtual Environment](../saas-apps/on24-tutorial.md), [RingCentral](../saas-apps/ringcentral-tutorial.md), [Zscaler Three](../saas-apps/zscaler-three-tutorial.md), [Phraseanet](../saas-apps/phraseanet-tutorial.md), [Appraisd](../saas-apps/appraisd-tutorial.md), [Workspot Control](../saas-apps/workspotcontrol-tutorial.md), [Shuccho Navi](../saas-apps/shucchonavi-tutorial.md), [Glassfrog](../saas-apps/glassfrog-tutorial.md)
-
-For more information about the apps, see [SaaS application integration with Azure Active Directory](../saas-apps/tutorial-list.md). For more information about listing your application in the Azure AD app gallery, see [List your application in the Azure Active Directory application gallery](../manage-apps/v2-howto-app-gallery-listing.md).
---
-### Azure AD Domain Services Email Notifications
-
-**Type:** New feature
-**Service category:** Azure AD Domain Services
-**Product capability:** Azure AD Domain Services
-
-Azure AD Domain Services provides alerts on the Azure portal about misconfigurations or problems with your managed domain. These alerts include step-by-step guides so you can try to fix the problems without having to contact support.
-
-Starting in October, you'll be able to customize the notification settings for your managed domain so when new alerts occur, an email is sent to a designated group of people, eliminating the need to constantly check the portal for updates.
-
-For more information, see [Notification settings in Azure AD Domain Services](../../active-directory-domain-services/notifications.md).
---
-### Azure portal supports using the ForceDelete domain API to delete custom domains
-
-**Type:** Changed feature
-**Service category:** Directory Management
-**Product capability:** Directory
-
-We're pleased to announce that you can now use the ForceDelete domain API to delete your custom domain names by asynchronously renaming references, like users, groups, and apps from your custom domain name (contoso.com) back to the initial default domain name (contoso.onmicrosoft.com).
-
-This change helps you to more quickly delete your custom domain names if your organization no longer uses the name, or if you need to use the domain name with another Azure AD.
-
-For more information, see [Delete a custom domain name](../enterprise-users/domains-manage.md#delete-a-custom-domain-name).
---
-## September 2018
-
-### Updated administrator role permissions for dynamic groups
-
-**Type:** Fixed
-**Service category:** Group Management
-**Product capability:** Collaboration
-
-We've fixed an issue so specific administrator roles can now create and update dynamic membership rules, without needing to be the owner of the group.
-
-The roles are:
--- Global administrator--- Intune administrator--- User administrator-
-For more information, see [Create a dynamic group and check status](../enterprise-users/groups-create-rule.md)
---
-### Simplified Single Sign-On (SSO) configuration settings for some third-party apps
-
-**Type:** New feature
-**Service category:** Enterprise Apps
-**Product capability:** SSO
-
-We realize that setting up Single Sign-On (SSO) for Software as a Service (SaaS) apps can be challenging due to the unique nature of each apps configuration. We've built a simplified configuration experience to auto-populate the SSO configuration settings for the following third-party SaaS apps:
--- Zendesk--- ArcGis Online--- Jamf Pro-
-To start using this one-click experience, go to the **Azure portal** > **SSO configuration** page for the app. For more information, see [SaaS application integration with Azure Active Directory](../saas-apps/tutorial-list.md)
---
-### Azure Active Directory - Where is your data located? page
-
-**Type:** New feature
-**Service category:** Other
-**Product capability:** GoLocal
-
-Select your company's region from the **Azure Active Directory - Where is your data located** page to view which Azure datacenter houses your Azure AD data at rest for all Azure AD services. You can filter the information by specific Azure AD services for your company's region.
-
-To access this feature and for more information, see [Azure Active Directory - Where is your data located](https://aka.ms/AADDataMap).
---
-### New deployment plan available for the My Apps Access panel
-
-**Type:** New feature
-**Service category:** My Apps
-**Product capability:** SSO
-
-Check out the new deployment plan that's available for the My Apps Access panel (https://aka.ms/deploymentplans).
-The My Apps Access panel provides users with a single place to find and access their apps. This portal also provides users with self-service opportunities, such as requesting access to apps and groups, or managing access to these resources on behalf of others.
-
-For more information, see [What is the My Apps portal?](https://support.microsoft.com/account-billing/sign-in-and-start-apps-from-the-my-apps-portal-2f3b1bae-0e5a-4a86-a33e-876fbd2a4510)
---
-### New Troubleshooting and Support tab on the Sign-ins Logs page of the Azure portal
-
-**Type:** New feature
-**Service category:** Reporting
-**Product capability:** Monitoring & Reporting
-
-The new **Troubleshooting and Support** tab on the **Sign-ins** page of the Azure portal, is intended to help admins and support engineers troubleshoot issues related to Azure AD sign-ins. This new tab provides the error code, error message, and remediation recommendations (if any) to help solve the problem. If you're unable to resolve the problem, we also give you a new way to create a support ticket using the **Copy to clipboard** experience, which populates the **Request ID** and **Date (UTC)** fields for the log file in your support ticket.
-
-![Sign-in logs showing the new tab](media/whats-new/troubleshooting-and-support.png)
---
-### Enhanced support for custom extension properties used to create dynamic membership rules
-
-**Type:** Changed feature
-**Service category:** Group Management
-**Product capability:** Collaboration
-
-With this update, you can now select the **Get custom extension properties** link from the dynamic user group rule builder, enter your unique app ID, and receive the full list of custom extension properties to use when creating a dynamic membership rule for users. This list can also be refreshed to get any new custom extension properties for that app.
-
-For more information about using custom extension properties for dynamic membership rules, see [Extension properties and custom extension properties](../enterprise-users/groups-dynamic-membership.md#extension-properties-and-custom-extension-properties)
---
-### New approved client apps for Azure AD app-based Conditional Access
-
-**Type:** Plan for change
-**Service category:** Conditional Access
-**Product capability:** Identity security and protection
-
-The following apps are on the list of [approved client apps](../conditional-access/concept-conditional-access-conditions.md#client-apps):
--- Microsoft To-Do--- Microsoft Stream-
-For more information, see:
--- [Azure AD app-based Conditional Access](../conditional-access/app-based-conditional-access.md)---
-### New support for Self-Service Password Reset from the Windows 7/8/8.1 Lock screen
-
-**Type:** New feature
-**Service category:** SSPR
-**Product capability:** User Authentication
-
-After you set up this new feature, your users will see a link to reset their password from the **Lock** screen of a device running Windows 7, Windows 8, or Windows 8.1. By clicking that link, the user is guided through the same password reset flow as through the web browser.
-
-For more information, see [How to enable password reset from Windows 7, 8, and 8.1](../authentication/howto-sspr-windows.md)
---
-### Change notice: Authorization codes will no longer be available for reuse
-
-**Type:** Plan for change
-**Service category:** Authentications (Logins)
-**Product capability:** User Authentication
-
-Starting on November 15, 2018, Azure AD will stop accepting previously used authentication codes for apps. This security change helps to bring Azure AD in line with the OAuth specification and will be enforced on both the v1 and v2 endpoints.
-
-If your app reuses authorization codes to get tokens for multiple resources, we recommend that you use the code to get a refresh token, and then use that refresh token to acquire additional tokens for other resources. Authorization codes can only be used once, but refresh tokens can be used multiple times across multiple resources. An app that attempts to reuse an authentication code during the OAuth code flow will get an invalid_grant error.
-
-For this and other protocols-related changes, see [the full list of what's new for authentication](../develop/reference-breaking-changes.md).
---
-### New Federated Apps available in Azure AD app gallery - September 2018
-
-**Type:** New feature
-**Service category:** Enterprise Apps
-**Product capability:** 3rd Party Integration
-
-In September 2018, we've added these 16 new apps with Federation support to the app gallery:
-
-[Uberflip](../saas-apps/uberflip-tutorial.md), [Comeet Recruiting Software](../saas-apps/comeetrecruitingsoftware-tutorial.md), [Workteam](../saas-apps/workteam-tutorial.md), [ArcGIS Enterprise](../saas-apps/arcgisenterprise-tutorial.md), [Nuclino](../saas-apps/nuclino-tutorial.md), [JDA Cloud](../saas-apps/jdacloud-tutorial.md), [Snowflake](../saas-apps/snowflake-tutorial.md), NavigoCloud, [Figma](../saas-apps/figma-tutorial.md), join.me, [ZephyrSSO](../saas-apps/zephyrsso-tutorial.md), [Silverback](../saas-apps/silverback-tutorial.md), Riverbed Xirrus EasyPass, [Rackspace SSO](../saas-apps/rackspacesso-tutorial.md), Enlyft SSO for Azure, SurveyMonkey, [Convene](../saas-apps/convene-tutorial.md), [dmarcian](../saas-apps/dmarcian-tutorial.md)
-
-For more information about the apps, see [SaaS application integration with Azure Active Directory](../saas-apps/tutorial-list.md). For more information about listing your application in the Azure AD app gallery, see [List your application in the Azure Active Directory application gallery](../manage-apps/v2-howto-app-gallery-listing.md).
---
-### Support for additional claims transformations methods
-
-**Type:** New feature
-**Service category:** Enterprise Apps
-**Product capability:** SSO
-
-We've introduced new claim transformation methods, ToLower() and ToUpper(), which can be applied to SAML tokens from the SAML-based **Single Sign-On Configuration** page.
-
-For more information, see [How to customize claims issued in the SAML token for enterprise applications in Azure AD](../develop/active-directory-saml-claims-customization.md)
---
-### Updated SAML-based app configuration UI (preview)
-
-**Type:** Changed feature
-**Service category:** Enterprise Apps
-**Product capability:** SSO
-
-As part of our updated SAML-based app configuration UI, you'll get:
--- An updated walkthrough experience for configuring your SAML-based apps.--- More visibility about what's missing or incorrect in your configuration.--- The ability to add multiple email addresses for expiration certificate notification.--- New claim transformation methods, ToLower() and ToUpper(), and more.--- A way to upload your own token signing certificate for your enterprise apps.--- A way to set the NameID Format for SAML apps, and a way to set the NameID value as Directory Extensions.-
-To turn on this updated view, click the **Try out our new experience** link from the top of the **Single Sign-On** page. For more information, see [Tutorial: Configure SAML-based single sign-on for an application with Azure Active Directory](../manage-apps/view-applications-portal.md).
---
-## August 2018
-
-### Changes to Azure Active Directory IP address ranges
-
-**Type:** Plan for change
-**Service category:** Other
-**Product capability:** Platform
-
-We're introducing larger IP ranges to Azure AD, which means if you've configured Azure AD IP address ranges for your firewalls, routers, or Network Security Groups, you'll need to update them. We're making this update so you won't have to change your firewall, router, or Network Security Groups IP range configurations again when Azure AD adds new endpoints.
-
-Network traffic is moving to these new ranges over the next two months. To continue with uninterrupted service, you must add these updated values to your IP Addresses before September 10, 2018:
--- 20.190.128.0/18--- 40.126.0.0/18-
-We strongly recommend not removing the old IP Address ranges until all of your network traffic has moved to the new ranges. For updates about the move and to learn when you can remove the old ranges, see [Office 365 URLs and IP address ranges](https://support.office.com/article/Office-365-URLs-and-IP-address-ranges-8548a211-3fe7-47cb-abb1-355ea5aa88a2).
---
-### Change notice: Authorization codes will no longer be available for reuse
-
-**Type:** Plan for change
-**Service category:** Authentications (Logins)
-**Product capability:** User Authentication
-
-Starting on November 15, 2018, Azure AD will stop accepting previously used authentication codes for apps. This security change helps to bring Azure AD in line with the OAuth specification and will be enforced on both the v1 and v2 endpoints.
-
-If your app reuses authorization codes to get tokens for multiple resources, we recommend that you use the code to get a refresh token, and then use that refresh token to acquire additional tokens for other resources. Authorization codes can only be used once, but refresh tokens can be used multiple times across multiple resources. An app that attempts to reuse an authentication code during the OAuth code flow will get an invalid_grant error.
-
-For this and other protocols-related changes, see [the full list of what's new for authentication](../develop/reference-breaking-changes.md).
---
-### Converged security info management for self-service password (SSPR) and multifactor authentication (MFA)
-
-**Type:** New feature
-**Service category:** SSPR
-**Product capability:** User Authentication
-
-This new feature helps people manage their security info (such as, phone number, mobile app, and so on) for SSPR and multifactor authentication (MFA) in a single location and experience; as compared to previously, where it was done in two different locations.
-
-This converged experience also works for people using either SSPR or multifactor authentication (MFA). Additionally, if your organization doesn't enforce multifactor authentication (MFA) or SSPR registration, people can still register any multifactor authentication (MFA) or SSPR security info methods allowed by your organization from the My Apps portal.
-
-This is an opt-in public preview. Administrators can turn on the new experience (if desired) for a selected group or for all users in a tenant. For more information about the converged experience, see the [Converged experience blog](https://cloudblogs.microsoft.com/enterprisemobility/2018/08/06/mfa-and-sspr-updates-now-in-public-preview/)
---
-### New HTTP-Only cookies setting in Azure AD Application proxy apps
-
-**Type:** New feature
-**Service category:** App Proxy
-**Product capability:** Access Control
-
-There's a new setting called, **HTTP-Only Cookies** in your Application Proxy apps. This setting helps provide extra security by including the HTTPOnly flag in the HTTP response header for both Application Proxy access and session cookies, stopping access to the cookie from a client-side script and further preventing actions like copying or modifying the cookie. Although this flag hasn't been used previously, your cookies have always been encrypted and transmitted using a TLS connection to help protect against improper modifications.
-
-This setting isn't compatible with apps using ActiveX controls, such as Remote Desktop. If you're in this situation, we recommend that you turn off this setting.
-
-For more information about the HTTP-Only Cookies setting, see [Publish applications using Azure AD Application Proxy](../app-proxy/application-proxy-add-on-premises-application.md).
---
-### Privileged Identity Management (PIM) for Azure resources supports Management Group resource types
-
-**Type:** New feature
-**Service category:** Privileged Identity Management
-**Product capability:** Privileged Identity Management
-
-Just-In-Time activation and assignment settings can now be applied to Management Group resource types, just like you already do for Subscriptions, Resource Groups, and Resources (such as VMs, App Services, and more). In addition, anyone with a role that provides administrator access for a Management Group can discover and manage that resource in PIM.
-
-For more information about PIM and Azure resources, see [Discover and manage Azure resources by using Privileged Identity Management](../privileged-identity-management/pim-resource-roles-discover-resources.md)
---
-### Application access (preview) provides faster access to the Azure portal
-
-**Type:** New feature
-**Service category:** Privileged Identity Management
-**Product capability:** Privileged Identity Management
-
-Today, when activating a role using PIM, it can take over 10 minutes for the permissions to take effect. If you choose to use Application access, which is currently in public preview, administrators can access the Azure portal as soon as the activation request completes.
-
-Currently, Application access only supports the Azure portal experience and Azure resources. For more information about PIM and Application access, see [What is Azure AD Privileged Identity Management?](../privileged-identity-management/pim-configure.md)
---
-### New Federated Apps available in Azure AD app gallery - August 2018
-
-**Type:** New feature
-**Service category:** Enterprise Apps
-**Product capability:** 3rd Party Integration
-
-In August 2018, we've added these 16 new apps with Federation support to the app gallery:
-
-[Hornbill](../saas-apps/hornbill-tutorial.md), [Bridgeline Unbound](../saas-apps/bridgelineunbound-tutorial.md), [Sauce Labs - Mobile and Web Testing](../saas-apps/saucelabs-mobileandwebtesting-tutorial.md), [Meta Networks Connector](../saas-apps/metanetworksconnector-tutorial.md), [Way We Do](../saas-apps/waywedo-tutorial.md), [Spotinst](../saas-apps/spotinst-tutorial.md), [ProMaster (by Inlogik)](../saas-apps/promaster-tutorial.md), SchoolBooking, [4me](../saas-apps/4me-tutorial.md), [Dossier](../saas-apps/dossier-tutorial.md), [N2F - Expense reports](../saas-apps/n2f-expensereports-tutorial.md), [Comm100 Live Chat](../saas-apps/comm100livechat-tutorial.md), [SafeConnect](../saas-apps/safeconnect-tutorial.md), [ZenQMS](../saas-apps/zenqms-tutorial.md), [eLuminate](../saas-apps/eluminate-tutorial.md), [Dovetale](../saas-apps/dovetale-tutorial.md).
-
-For more information about the apps, see [SaaS application integration with Azure Active Directory](../saas-apps/tutorial-list.md). For more information about listing your application in the Azure AD app gallery, see [List your application in the Azure Active Directory application gallery](../manage-apps/v2-howto-app-gallery-listing.md).
---
-### Native Tableau support is now available in Azure AD Application Proxy
-
-**Type:** Changed feature
-**Service category:** App Proxy
-**Product capability:** Access Control
-
-With our update from the OpenID Connect to the OAuth 2.0 Code Grant protocol for our pre-authentication protocol, you no longer have to do any additional configuration to use Tableau with Application Proxy. This protocol change also helps Application Proxy better support more modern apps by using only HTTP redirects, which are commonly supported in JavaScript and HTML tags.
---
-### New support to add Google as an identity provider for B2B guest users in Azure Active Directory (preview)
-
-**Type:** New feature
-**Service category:** B2B
-**Product capability:** B2B/B2C
-
-By setting up federation with Google in your organization, you can let invited Gmail users sign in to your shared apps and resources using their existing Google account, without having to create a personal Microsoft Account (MSAs) or an Azure AD account.
-
-This is an opt-in public preview. For more information about Google federation, see [Add Google as an identity provider for B2B guest users](../external-identities/google-federation.md).
---
-## July 2018
-
-### Improvements to Azure Active Directory email notifications
-
-**Type:** Changed feature
-**Service category:** Other
-**Product capability:** Identity lifecycle management
-
-Azure Active Directory (Azure AD) emails now feature an updated design, as well as changes to the sender email address and sender display name, when sent from the following
--- Azure AD Access Reviews-- Azure AD Connect Health-- Azure AD Identity Protection-- Azure AD Privileged Identity Management-- Enterprise App Expiring Certificate Notifications-- Enterprise App Provisioning Service Notifications-
-The email notifications will be sent from the following email address and display name:
--- Email address: azure-noreply@microsoft.com-- Display name: Microsoft Azure-
-For an example of some of the new e-mail designs and more information, see [Email notifications in Azure AD PIM](../privileged-identity-management/pim-email-notifications.md).
---
-### Azure AD Activity Logs are now available through Azure Monitor
-
-**Type:** New feature
-**Service category:** Reporting
-**Product capability:** Monitoring & Reporting
-
-The Azure AD Activity Logs are now available in public preview for the Azure Monitor (Azure's platform-wide monitoring service). Azure Monitor offers you long-term retention and seamless integration, in addition to these improvements:
--- Long-term retention by routing your log files to your own Azure storage account.--- Seamless SIEM integration, without requiring you to write or maintain custom scripts.--- Seamless integration with your own custom solutions, analytics tools, or incident management solutions.-
-For more information about these new capabilities, see our blog [Azure AD activity logs in Azure Monitor diagnostics is now in public preview](https://cloudblogs.microsoft.com/enterprisemobility/2018/07/26/azure-ad-activity-logs-in-azure-monitor-diagnostics-now-in-public-preview/) and our documentation, [Azure Active Directory activity logs in Azure Monitor (preview)](../reports-monitoring/concept-activity-logs-azure-monitor.md).
---
-### Conditional Access information added to the Azure AD sign-ins report
-
-**Type:** New feature
-**Service category:** Reporting
-**Product capability:** Identity Security & Protection
-
-This update lets you see which policies are evaluated when a user signs in along with the policy outcome. In addition, the report now includes the type of client app used by the user, so you can identify legacy protocol traffic. Report entries can also now be searched for a correlation ID, which can be found in the user-facing error message and can be used to identify and troubleshoot the matching sign-in request.
---
-### View legacy authentications through Sign-ins activity logs
-
-**Type:** New feature
-**Service category:** Reporting
-**Product capability:** Monitoring & Reporting
-
-With the introduction of the **Client App** field in the Sign-in activity logs, customers can now see users that are using legacy authentications. Customers will be able to access this information using the Sign-ins Microsoft Graph API or through the Sign-in activity logs in Azure portal where you can use the **Client App** control to filter on legacy authentications. Check out the documentation for more details.
---
-### New Federated Apps available in Azure AD app gallery - July 2018
-
-**Type:** New feature
-**Service category:** Enterprise Apps
-**Product capability:** 3rd Party Integration
-
-In July 2018, we've added these 16 new apps with Federation support to the app gallery:
-
-[Innovation Hub](../saas-apps/innovationhub-tutorial.md), [Leapsome](../saas-apps/leapsome-tutorial.md), [Certain Admin SSO](../saas-apps/certainadminsso-tutorial.md), PSUC Staging, [iPass SmartConnect](../saas-apps/ipasssmartconnect-tutorial.md), [Screencast-O-Matic](../saas-apps/screencast-tutorial.md), PowerSchool Unified Classroom, [Eli Onboarding](../saas-apps/elionboarding-tutorial.md), [Bomgar Remote Support](../saas-apps/bomgarremotesupport-tutorial.md), [Nimblex](../saas-apps/nimblex-tutorial.md), [Imagineer WebVision](../saas-apps/imagineerwebvision-tutorial.md), [Insight4GRC](../saas-apps/insight4grc-tutorial.md), [SecureW2 JoinNow Connector](../saas-apps/securejoinnow-tutorial.md), [Kanbanize](../saas-apps/kanbanize-tutorial.md), [SmartLPA](../saas-apps/smartlpa-tutorial.md), [Skills Base](../saas-apps/skillsbase-tutorial.md)
-
-For more information about the apps, see [SaaS application integration with Azure Active Directory](../saas-apps/tutorial-list.md). For more information about listing your application in the Azure AD app gallery, see [List your application in the Azure Active Directory application gallery](../manage-apps/v2-howto-app-gallery-listing.md).
---
-### New user provisioning SaaS app integrations - July 2018
-
-**Type:** New feature
-**Service category:** App Provisioning
-**Product capability:** 3rd Party Integration
-
-Azure AD allows you to automate the creation, maintenance, and removal of user identities in SaaS applications such as Dropbox, Salesforce, ServiceNow, and more. For July 2018, we have added user provisioning support for the following applications in the Azure AD app gallery:
--- [Cisco WebEx](../saas-apps/cisco-webex-provisioning-tutorial.md)--- [Bonusly](../saas-apps/bonusly-provisioning-tutorial.md)-
-For a list of all applications that support user provisioning in the Azure AD gallery, see [SaaS application integration with Azure Active Directory](../saas-apps/tutorial-list.md).
---
-### Connect Health for Sync - An easier way to fix orphaned and duplicate attribute sync errors
-
-**Type:** New feature
-**Service category:** AD Connect
-**Product capability:** Monitoring & Reporting
-
-Azure AD Connect Health introduces self-service remediation to help you highlight and fix sync errors. This feature troubleshoots duplicated attribute sync errors and fixes objects that are orphaned from Azure AD. This diagnosis has the following benefits:
--- Narrows down duplicated attribute sync errors, providing specific fixes--- Applies a fix for dedicated Azure AD scenarios, resolving errors in a single step--- No upgrade or configuration is required to turn on and use this feature-
-For more information, see [Diagnose and remediate duplicated attribute sync errors](../hybrid/how-to-connect-health-diagnose-sync-errors.md)
---
-### Visual updates to the Azure AD and MSA sign-in experiences
-
-**Type:** Changed feature
-**Service category:** Azure AD
-**Product capability:** User Authentication
-
-We've updated the UI for Microsoft's online services sign-in experience, such as for Office 365 and Azure. This change makes the screens less cluttered and more straightforward. For more information about this change, see the [Upcoming improvements to the Azure AD sign-in experience](https://cloudblogs.microsoft.com/enterprisemobility/2018/04/04/upcoming-improvements-to-the-azure-ad-sign-in-experience/) blog.
---
-### New release of Azure AD Connect - July 2018
-
-**Type:** Changed feature
-**Service category:** App Provisioning
-**Product capability:** Identity Lifecycle Management
-
-The latest release of Azure AD Connect includes:
--- Bug fixes and supportability updates--- General Availability of the Ping-Federate integration--- Updates to the latest SQL 2012 client-
-For more information about this update, see [Azure AD Connect: Version release history](../hybrid/reference-connect-version-history.md)
---
-### Updates to the terms of use end-user UI
-
-**Type:** Changed feature
-**Service category:** Terms of use
-**Product capability:** Governance
-
-We're updating the acceptance string in the TOU end-user UI.
-
-**Current text:** In order to access [tenantName] resources, you must accept the terms of use.<br>**New text:** In order to access [tenantName] resource, you must read the terms of use.
-
-**Current text:** Choosing to accept means that you agree to all of the above terms of use.<br>**New text:** Please select Accept to confirm that you have read and understood the terms of use.
---
-### Pass-through Authentication supports legacy protocols and applications
-
-**Type:** Changed feature
-**Service category:** Authentications (Logins)
-**Product capability:** User Authentication
-
-Pass-through Authentication now supports legacy protocols and apps. The following limitations are now fully supported:
-- User sign-ins to legacy Office client applications, Office 2010 and Office 2013, without requiring modern authentication.
-- Access to calendar sharing and free/busy information in Exchange hybrid environments on Office 2010 only.
-- User sign-ins to Skype for Business client applications without requiring modern authentication.
-- User sign-ins to PowerShell version 1.0.
-- The Apple Device Enrollment Program (Apple DEP), using the iOS Setup Assistant.
---
-### Converged security info management for self-service password reset and MultiFactor Authentication
-
-**Type:** New feature
-**Service category:** SSPR
-**Product capability:** User Authentication
-
-This new feature lets users manage their security info (for example, phone number, email address, mobile app, and so on) for self-service password reset (SSPR) and multifactor authentication (MFA) in a single experience. Users no longer have to register the same security info for SSPR and MFA in two different experiences. This new experience also applies to users who have either SSPR or MFA.
-
-If an organization isn't enforcing multifactor authentication (MFA) or SSPR registration, users can register their security info through the **My Apps** portal. From there, users can register any methods enabled for multifactor authentication (MFA) or SSPR.
-
-This is an opt-in public preview. Admins can turn on the new experience (if desired) for a selected group of users or all users in a tenant.
---
-### Use the Microsoft Authenticator app to verify your identity when you reset your password
-
-**Type:** Changed feature
-**Service category:** SSPR
-**Product capability:** User Authentication
-
-This feature lets non-admins verify their identity while resetting a password using a notification or code from Microsoft Authenticator (or any other authenticator app). After admins turn on this self-service password reset method, users who have registered a mobile app through aka.ms/mfasetup or aka.ms/setupsecurityinfo can use their mobile app as a verification method while resetting their password.
-
-Mobile app notification can only be turned on as part of a policy that requires two methods to reset your password.
---
-## June 2018
-
-### Change notice: Security fix to the delegated authorization flow for apps using Azure AD Activity Logs API
-
-**Type:** Plan for change
-**Service category:** Reporting
-**Product capability:** Monitoring & Reporting
-
-Due to our stronger security enforcement, we've had to make a change to the permissions for apps that use a delegated authorization flow to access [Azure AD Activity Logs APIs](../reports-monitoring/concept-reporting-api.md). This change will occur by **June 26, 2018**.
-
-If any of your apps use Azure AD Activity Log APIs, follow these steps to ensure the app doesn't break after the change happens.
-
-**To update your app permissions**
-
-1. Sign in to the Azure portal, select **Azure Active Directory**, and then select **App Registrations**.
-2. Select your app that uses the Azure AD Activity Logs API, select **Settings**, select **Required permissions**, and then select the **Windows Azure Active Directory** API.
-3. In the **Delegated permissions** area of the **Enable access** blade, select the box next to **Read directory** data, and then select **Save**.
-4. Select **Grant permissions**, and then select **Yes**.
-
- >[!Note]
- >You must be a Global administrator to grant permissions to the app.
-
-For more information, see the [Grant permissions](../reports-monitoring/howto-configure-prerequisites-for-reporting-api.md#grant-permissions) area of the Prerequisites to access the Azure AD reporting API article.
---
-### Configure TLS settings to connect to Azure AD services for PCI DSS compliance
-
-**Type:** New feature
-**Service category:** N/A
-**Product capability:** Platform
-
-Transport Layer Security (TLS) is a protocol that provides privacy and data integrity between two communicating applications and is the most widely deployed security protocol used today.
-
-The [PCI Security Standards Council](https://www.pcisecuritystandards.org/) has determined that early versions of TLS and Secure Sockets Layer (SSL) must be disabled in favor of enabling new and more secure app protocols, with compliance starting on **June 30, 2018**. This change means that if you connect to Azure AD services and require PCI DSS-compliance, you must disable TLS 1.0. Multiple versions of TLS are available, but TLS 1.2 is the latest version available for Azure Active Directory Services. We highly recommend moving directly to TLS 1.2 for both client/server and browser/server combinations.
-
-Out-of-date browsers might not support newer TLS versions, such as TLS 1.2. To see which versions of TLS are supported by your browser, go to the [Qualys SSL Labs](https://www.ssllabs.com/) site and select **Test your browser**. We recommend you upgrade to the latest version of your web browser and preferably enable only TLS 1.2.
-
-**To enable TLS 1.2, by browser**
-- **Microsoft Edge and Internet Explorer (both are set using Internet Explorer)**
- 1. Open Internet Explorer, select **Tools** > **Internet Options** > **Advanced**.
- 2. In the **Security** area, select **use TLS 1.2**, and then select **OK**.
- 3. Close all browser windows and restart Internet Explorer.
-- **Google Chrome**
- 1. Open Google Chrome, type *chrome://settings/* into the address bar, and press **Enter**.
- 2. Expand the **Advanced** options, go to the **System** area, and select **Open proxy settings**.
- 3. In the **Internet Properties** box, select the **Advanced** tab, go to the **Security** area, select **use TLS 1.2**, and then select **OK**.
- 4. Close all browser windows and restart Google Chrome.
-- **Mozilla Firefox**
- 1. Open Firefox, type *about:config* into the address bar, and then press **Enter**.
- 2. Search for the term, *TLS*, and then select the **security.tls.version.max** entry.
- 3. Set the value to **3** to force the browser to use up to version TLS 1.2, and then select **OK**.
-
- >[!NOTE]
- >Firefox version 60.0 supports TLS 1.3, so you can also set the security.tls.version.max value to **4**.
-
- 4. Close all browser windows and restart Mozilla Firefox.
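
In addition to the browser settings above, it can help to confirm which TLS version a client actually negotiates with an Azure AD endpoint. The following is a minimal sketch that uses Python's standard `ssl` module; `login.microsoftonline.com` is used only as an example host.

```python
import socket
import ssl

# Example host: an Azure AD sign-in endpoint. Any HTTPS endpoint can be checked the same way.
host = "login.microsoftonline.com"
port = 443

# The default context negotiates the highest TLS version that both sides support.
context = ssl.create_default_context()

with socket.create_connection((host, port)) as sock:
    with context.wrap_socket(sock, server_hostname=host) as tls_sock:
        # Prints the negotiated protocol version, for example "TLSv1.2" or "TLSv1.3".
        print(f"Negotiated protocol: {tls_sock.version()}")
        print(f"Cipher suite: {tls_sock.cipher()[0]}")
```

If the client reports TLS 1.0 or TLS 1.1, update the operating system, runtime, or browser before the compliance date noted above.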
---
-### New Federated Apps available in Azure AD app gallery - June 2018
-
-**Type:** New feature
-**Service category:** Enterprise Apps
-**Product capability:** 3rd Party Integration
-
-In June 2018, we've added these 15 new apps with Federation support to the app gallery:
-
-[Skytap](../saas-apps/skytap-tutorial.md), [Settling music](../saas-apps/settlingmusic-tutorial.md), [SAML 1.1 Token enabled LOB App](../saas-apps/saml-tutorial.md), [Supermood](../saas-apps/supermood-tutorial.md), [Autotask](../saas-apps/autotaskendpointbackup-tutorial.md), [Endpoint Backup](../saas-apps/autotaskendpointbackup-tutorial.md), [Skyhigh Networks](../saas-apps/skyhighnetworks-tutorial.md), Smartway2, [TonicDM](../saas-apps/tonicdm-tutorial.md), [Moconavi](../saas-apps/moconavi-tutorial.md), [Zoho One](../saas-apps/zohoone-tutorial.md), [SharePoint on-premises](../saas-apps/sharepoint-on-premises-tutorial.md), [ForeSee CX Suite](../saas-apps/foreseecxsuite-tutorial.md), [Vidyard](../saas-apps/vidyard-tutorial.md), [ChronicX](../saas-apps/chronicx-tutorial.md)
-
-For more information about the apps, see [SaaS application integration with Azure Active Directory](../saas-apps/tutorial-list.md). For more information about listing your application in the Azure AD app gallery, see [List your application in the Azure Active Directory application gallery](../manage-apps/v2-howto-app-gallery-listing.md).
---
-### Azure AD Password Protection is available in public preview
-
-**Type:** New feature
-**Service category:** Identity Protection
-**Product capability:** User Authentication
-
-Use Azure AD Password Protection to help eliminate easily guessed passwords from your environment. Eliminating these passwords helps to lower the risk of compromise from a password spray type of attack.
-
-Specifically, Azure AD Password Protection helps you:
-- Protect your organization's accounts in both Azure AD and Windows Server Active Directory (AD).
-- Stop your users from using passwords on a list of more than 500 of the most commonly used passwords, and over 1 million character substitution variations of those passwords.
-- Administer Azure AD Password Protection from a single location in the Azure portal, for both Azure AD and on-premises Windows Server AD.
-For more information about Azure AD Password Protection, see [Eliminate bad passwords in your organization](../authentication/concept-password-ban-bad.md).
---
-### New "all guests" Conditional Access policy template created during terms of use creation
-
-**Type:** New feature
-**Service category:** Terms of use
-**Product capability:** Governance
-
-During the creation of your terms of use, a new Conditional Access policy template is also created for "all guests" and "all apps". This new policy template applies the newly created ToU, streamlining the creation and enforcement process for guests.
-
-For more information, see [Azure Active Directory Terms of use feature](../conditional-access/terms-of-use.md).
---
-### New "custom" Conditional Access policy template created during terms of use creation
-
-**Type:** New feature
-**Service category:** Terms of use
-**Product capability:** Governance
-
-During the creation of your terms of use, a new "custom" Conditional Access policy template is also created. This new policy template lets you create the ToU and then immediately go to the Conditional Access policy creation blade, without needing to manually navigate through the portal.
-
-For more information, see [Azure Active Directory Terms of use feature](../conditional-access/terms-of-use.md).
---
-### New and comprehensive guidance about deploying Azure AD Multi-Factor Authentication
-
-**Type:** New feature
-**Service category:** Other
-**Product capability:** Identity Security & Protection
-
-We've released new step-by-step guidance about how to deploy Azure AD Multi-Factor Authentication (MFA) in your organization.
-
-To view the Azure AD Multi-Factor Authentication (MFA) deployment guide, go to the [Identity Deployment Guides](./active-directory-deployment-plans.md) repo on GitHub. To provide feedback about the deployment guides, use the [Deployment Plan Feedback form](https://aka.ms/deploymentplanfeedback). If you have any questions about the deployment guides, contact us at [IDGitDeploy](mailto:idgitdeploy@microsoft.com).
---
-### Azure AD delegated app management roles are in public preview
-
-**Type:** New feature
-**Service category:** Enterprise Apps
-**Product capability:** Access Control
-
-Admins can now delegate app management tasks without assigning the Global Administrator role. The new roles and capabilities are:
-- **New standard Azure AD admin roles:**
- - **Application Administrator.** Grants the ability to manage all aspects of all apps, including registration, SSO settings, app assignments and licensing, App proxy settings, and consent (except to Azure AD resources).
-
- - **Cloud Application Administrator.** Grants all of the Application Administrator abilities, except for App proxy because it doesn't provide on-premises access.
-
- - **Application Developer.** Grants the ability to create app registrations, even if the **allow users to register apps** option is turned off.
-- **Ownership (set up per-app registration and per-enterprise app, similar to the group ownership process):**
- - **App Registration Owner.** Grants the ability to manage all aspects of owned app registration, including the app manifest and adding additional owners.
-
- - **Enterprise App Owner.** Grants the ability to manage many aspects of owned enterprise apps, including SSO settings, app assignments, and consent (except to Azure AD resources).
-
-For more information about public preview, see the [Azure AD delegated application management roles are in public preview!](https://cloudblogs.microsoft.com/enterprisemobility/2018/06/13/hallelujah-azure-ad-delegated-application-management-roles-are-in-public-preview/) blog. For more information about roles and permissions, see [Assigning administrator roles in Azure Active Directory](../roles/permissions-reference.md).
---
-## May 2018
-
-### ExpressRoute support changes
-
-**Type:** Plan for change
-**Service category:** Authentications (Logins)
-**Product capability:** Platform
-
-Software as a Service offerings, like Azure Active Directory (Azure AD), are designed to work best when accessed directly over the Internet, without requiring ExpressRoute or any other private VPN tunnels. Because of this, on **August 1, 2018**, we'll stop supporting ExpressRoute for Azure AD services using Azure public peering and Azure communities in Microsoft peering. Any services impacted by this change might notice Azure AD traffic gradually shifting from ExpressRoute to the Internet.
-
-While we're changing our support, we also know there are still situations where you might need to use a dedicated set of circuits for your authentication traffic. Because of this, Azure AD will continue to support per-tenant IP range restrictions using ExpressRoute and services already on Microsoft peering with the "Other Office 365 Online services" community. If your services are impacted, but you require ExpressRoute, you must do the following:
-- **If you're on Azure public peering.** Move to Microsoft peering and sign up for the **Other Office 365 Online services (12076:5100)** community. For more info about how to move from Azure public peering to Microsoft peering, see the [Move a public peering to Microsoft peering](../../expressroute/how-to-move-peering.md) article.
-- **If you're on Microsoft peering.** Sign up for the **Other Office 365 Online service (12076:5100)** community. For more info about routing requirements, see the [Support for BGP communities section](../../expressroute/expressroute-routing.md#bgp) of the ExpressRoute routing requirements article.
-If you must continue to use dedicated circuits, you'll need to talk to your Microsoft Account team about how to get authorization to use the **Other Office 365 Online service (12076:5100)** community. The MS Office-managed review board will verify whether you need those circuits and make sure you understand the technical implications of keeping them. Unauthorized subscriptions trying to create route filters for Office 365 will receive an error message.
---
-### Microsoft Graph APIs for administrative scenarios for TOU
-
-**Type:** New feature
-**Service category:** Terms of use
-**Product capability:** Developer Experience
-
-We've added Microsoft Graph APIs for administrative operations on Azure AD terms of use. You can create, update, and delete terms of use objects.
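
As a rough sketch of what a call to these APIs can look like (token acquisition is omitted, and the placeholder token and permission noted in the comment are assumptions to verify against the Graph reference), the agreements in a tenant can be listed like this:

```python
import requests

# Assumption: ACCESS_TOKEN holds a valid Microsoft Graph token with the
# Agreement read permission (write permission is needed for create/update/delete).
ACCESS_TOKEN = "<access-token>"
headers = {"Authorization": f"Bearer {ACCESS_TOKEN}"}

# List the existing terms of use agreements in the tenant.
response = requests.get(
    "https://graph.microsoft.com/v1.0/identityGovernance/termsOfUse/agreements",
    headers=headers,
    timeout=30,
)
response.raise_for_status()

for agreement in response.json().get("value", []):
    print(agreement["id"], agreement.get("displayName"))
```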
---
-### Add Azure AD multi-tenant endpoint as an identity provider in Azure AD B2C
-
-**Type:** New feature
-**Service category:** B2C - Consumer Identity Management
-**Product capability:** B2B/B2C
-
-Using custom policies, you can now add the Azure AD common endpoint as an identity provider in Azure AD B2C. This allows you to have a single point of entry for all Azure AD users that are signing into your applications. For more information, see [Azure Active Directory B2C: Allow users to sign in to a multi-tenant Azure AD identity provider using custom policies](../../active-directory-b2c/identity-provider-azure-ad-multi-tenant.md).
---
-### Use Internal URLs to access apps from anywhere with our My Apps Sign-in Extension and the Azure AD Application Proxy
-
-**Type:** New feature
-**Service category:** My Apps
-**Product capability:** SSO
-
-Users can now access applications through internal URLs even when outside your corporate network by using the My Apps Secure Sign-in Extension for Azure AD. This works with any application that you have published using Azure AD Application Proxy, on any browser that also has the Access Panel browser extension installed. The URL redirection functionality is automatically enabled after a user signs in to the extension. The extension is available for download for [Microsoft Edge](https://go.microsoft.com/fwlink/?linkid=845176) and [Chrome](https://go.microsoft.com/fwlink/?linkid=866367).
---
-### Azure Active Directory - Data in Europe for Europe customers
-
-**Type:** New feature
-**Service category:** Other
-**Product capability:** GoLocal
-
-Customers in Europe require their data to stay in Europe and not be replicated outside of European datacenters to meet privacy and European laws. This [article](./active-directory-data-storage-eu.md) provides the specific details on what identity information is stored within Europe, and also provides details on information that is stored outside European datacenters.
---
-### New user provisioning SaaS app integrations - May 2018
-
-**Type:** New feature
-**Service category:** App Provisioning
-**Product capability:** 3rd Party Integration
-
-Azure AD allows you to automate the creation, maintenance, and removal of user identities in SaaS applications such as Dropbox, Salesforce, ServiceNow, and more. For May 2018, we have added user provisioning support for the following applications in the Azure AD app gallery:
-- [BlueJeans](../saas-apps/bluejeans-provisioning-tutorial.md)
-- [Cornerstone OnDemand](../saas-apps/cornerstone-ondemand-provisioning-tutorial.md)
-- [Zendesk](../saas-apps/zendesk-provisioning-tutorial.md)
-For a list of all applications that support user provisioning in the Azure AD gallery, see [https://aka.ms/appstutorial](../saas-apps/tutorial-list.md).
---
-### Azure AD access reviews of groups and app access now provides recurring reviews
-
-**Type:** New feature
-**Service category:** Access Reviews
-**Product capability:** Governance
-
-Access review of groups and apps is now generally available as part of Azure AD Premium P2. Administrators will be able to configure access reviews of group memberships and application assignments to automatically recur at regular intervals, such as monthly or quarterly.
---
-### Azure AD Activity logs (sign-ins and audit) are now available through MS Graph
-
-**Type:** New feature
-**Service category:** Reporting
-**Product capability:** Monitoring & Reporting
-
-Azure AD Activity logs, which include Sign-ins and Audit logs, are now available through the Microsoft Graph API. We have exposed two endpoints through the Microsoft Graph API to access these logs. Check out our [documents](../reports-monitoring/concept-reporting-api.md) for programmatic access to Azure AD Reporting APIs to get started.
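
As a minimal sketch of programmatic access (it assumes an app that already holds a Graph access token with the required audit log permission; token acquisition is omitted), the two endpoints can be queried like this:

```python
import requests

ACCESS_TOKEN = "<access-token>"  # assumption: token carries AuditLog.Read.All
headers = {"Authorization": f"Bearer {ACCESS_TOKEN}"}
base = "https://graph.microsoft.com/v1.0/auditLogs"

# Sign-in logs
sign_ins = requests.get(f"{base}/signIns?$top=10", headers=headers, timeout=30)
sign_ins.raise_for_status()
for entry in sign_ins.json().get("value", []):
    print(entry["createdDateTime"], entry.get("userPrincipalName"), entry.get("appDisplayName"))

# Audit (directory) logs
audits = requests.get(f"{base}/directoryAudits?$top=10", headers=headers, timeout=30)
audits.raise_for_status()
for entry in audits.json().get("value", []):
    print(entry["activityDateTime"], entry.get("activityDisplayName"))
```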
---
-### Improvements to the B2B redemption experience and leave an org
-
-**Type:** New feature
-**Service category:** B2B
-**Product capability:** B2B/B2C
-
-**Just in time redemption:** Once you share a resource with a guest user by using the B2B API, you don't need to send out a special invitation email. In most cases, the guest user can access the resource and will be taken through the redemption experience just in time. No more impact due to missed emails. No more asking your guest users "Did you click on that redemption link the system sent you?". This means once SPO uses the invitation manager, cloudy attachments can have the same canonical URL for all users, internal and external, in any state of redemption.
-
-**Modern redemption experience:** No more split screen redemption landing page. Users will see a modern consent experience with the inviting organization's privacy statement, just like they do for third-party apps.
-
-**Guest users can leave the org:** Once a user's relationship with an org is over, they can self-serve leaving the organization. No more calling the inviting org's admin to "be removed", no more raising support tickets.
---
-### New Federated Apps available in Azure AD app gallery - May 2018
-
-**Type:** New feature
-**Service category:** Enterprise Apps
-**Product capability:** 3rd Party Integration
-
-In May 2018, we've added these 18 new apps with Federation support to our app gallery:
-
-[AwardSpring](../saas-apps/awardspring-tutorial.md), Infogix Data3Sixty Govern, [Yodeck](../saas-apps/infogix-tutorial.md), [Jamf Pro](../saas-apps/jamfprosamlconnector-tutorial.md), [KnowledgeOwl](../saas-apps/knowledgeowl-tutorial.md), [Envi MMIS](../saas-apps/envimmis-tutorial.md), [LaunchDarkly](../saas-apps/launchdarkly-tutorial.md), [Adobe Captivate Prime](../saas-apps/adobecaptivateprime-tutorial.md), [Montage Online](../saas-apps/montageonline-tutorial.md), [まなびポケット](../saas-apps/manabipocket-tutorial.md), OpenReel, [Arc Publishing - SSO](../saas-apps/arc-tutorial.md), [PlanGrid](../saas-apps/plangrid-tutorial.md), [iWellnessNow](../saas-apps/iwellnessnow-tutorial.md), [Proxyclick](../saas-apps/proxyclick-tutorial.md), [Riskware](../saas-apps/riskware-tutorial.md), [Flock](../saas-apps/flock-tutorial.md), [Reviewsnap](../saas-apps/reviewsnap-tutorial.md)
-
-For more information about the apps, see [SaaS application integration with Azure Active Directory](../saas-apps/tutorial-list.md).
-
-For more information about listing your application in the Azure AD app gallery, see [List your application in the Azure Active Directory application gallery](../manage-apps/v2-howto-app-gallery-listing.md).
---
-### New step-by-step deployment guides for Azure Active Directory
-
-**Type:** New feature
-**Service category:** Other
-**Product capability:** Directory
-
-New, step-by-step guidance about how to deploy Azure Active Directory (Azure AD), including self-service password reset (SSPR), single sign-on (SSO), Conditional Access, App proxy, User provisioning, Active Directory Federation Services (ADFS) to Pass-through Authentication (PTA), and ADFS to Password hash sync (PHS).
-
-To view the deployment guides, go to the [Identity Deployment Guides](./active-directory-deployment-plans.md) repo on GitHub. To provide feedback about the deployment guides, use the [Deployment Plan Feedback form](https://aka.ms/deploymentplanfeedback). If you have any questions about the deployment guides, contact us at [IDGitDeploy](mailto:idgitdeploy@microsoft.com).
---
-### Enterprise Applications Search - Load More Apps
-
-**Type:** New feature
-**Service category:** Enterprise Apps
-**Product capability:** SSO
-
-Having trouble finding your applications or service principals? We've added the ability to load more applications in the all applications list for enterprise applications. By default, we show 20 applications. You can now select **Load more** to view additional applications.
---
-### The May release of AADConnect contains a public preview of the integration with PingFederate, important security updates, many bug fixes, and great new troubleshooting tools
-
-**Type:** Changed feature
-**Service category:** AD Connect
-**Product capability:** Identity Lifecycle Management
-
-The May release of AADConnect contains a public preview of the integration with PingFederate, important security updates, many bug fixes, and great new troubleshooting tools. You can find the release notes [here](../hybrid/reference-connect-version-history.md).
---
-### Azure AD access reviews: auto-apply
-
-**Type:** Changed feature
-**Service category:** Access Reviews
-**Product capability:** Governance
-
-Access reviews of groups and apps are now generally available as part of Azure AD Premium P2. An administrator can configure an access review to automatically apply the reviewer's changes to that group or app as the access review completes. The administrator can also specify what happens to the user's continued access if reviewers don't respond: remove access, keep access, or take system recommendations.
---
-### ID tokens can no longer be returned using the query response_mode for new apps.
-
-**Type:** Changed feature
-**Service category:** Authentications (Logins)
-**Product capability:** User Authentication
-
-Apps created on or after April 25, 2018 will no longer be able to request an **id_token** using the **query** response_mode. This brings Azure AD in line with the OIDC specifications and helps reduce your app's attack surface. Apps created before April 25, 2018 are not blocked from using the **query** response_mode with a response_type of **id_token**. The error returned when requesting an id_token from Azure AD is **AADSTS70007: 'query' is not a supported value of 'response_mode' when requesting a token**.
-
-The **fragment** and **form_post** response_modes continue to work. When creating new application objects (for example, for App Proxy usage), make sure the application uses one of these response_modes.
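
For illustration, here's a minimal sketch of an authorization request that asks for an **id_token** with a supported response_mode (**form_post**); the tenant, client ID, and redirect URI are placeholder values.

```python
import secrets
import urllib.parse

# Placeholder values; replace with your own app registration details.
tenant = "contoso.onmicrosoft.com"
client_id = "00000000-0000-0000-0000-000000000000"
redirect_uri = "https://localhost/auth/callback"

params = {
    "client_id": client_id,
    "response_type": "id_token",
    # "query" is rejected for new apps when requesting an id_token;
    # use "form_post" (shown here) or "fragment" instead.
    "response_mode": "form_post",
    "redirect_uri": redirect_uri,
    "scope": "openid profile",
    "nonce": secrets.token_urlsafe(16),
    "state": secrets.token_urlsafe(16),
}

authorize_url = (
    f"https://login.microsoftonline.com/{tenant}/oauth2/v2.0/authorize?"
    + urllib.parse.urlencode(params)
)
print(authorize_url)
```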
---
-## April 2018
-
-### Azure AD B2C Access Tokens are GA
-
-**Type:** New feature
-**Service category:** B2C - Consumer Identity Management
-**Product capability:** B2B/B2C
-
-You can now access Web APIs secured by Azure AD B2C using access tokens. The feature is moving from public preview to GA. The UI experience to configure Azure AD B2C applications and web APIs has been improved, and other minor improvements were made.
-
-For more information, see [Azure AD B2C: Requesting access tokens](../../active-directory-b2c/access-tokens.md).
---
-### Test single sign-on configuration for SAML-based applications
-
-**Type:** New feature
-**Service category:** Enterprise Apps
-**Product capability:** SSO
-
-When configuring SAML-based SSO applications, you're able to test the integration on the configuration page. If you encounter an error during sign in, you can provide the error in the testing experience and Azure AD provides you with resolution steps to solve the specific issue.
-
-For more information, see:
-- [Configuring single sign-on to applications that are not in the Azure Active Directory application gallery](../manage-apps/view-applications-portal.md)
-- [How to debug SAML-based single sign-on to applications in Azure Active Directory](../manage-apps/debug-saml-sso-issues.md)
---
-### Azure AD terms of use now has per user reporting
-
-**Type:** New feature
-**Service category:** Terms of use
-**Product capability:** Compliance
-
-Administrators can now select a given ToU and see all the users that have consented to that ToU and what date/time it took place.
-
-For more information, see the [Azure AD terms of use feature](../conditional-access/terms-of-use.md).
---
-### Azure AD Connect Health: Risky IP for AD FS extranet lockout protection
-
-**Type:** New feature
-**Service category:** Other
-**Product capability:** Monitoring & Reporting
-
-Connect Health now supports the ability to detect IP addresses that exceed a threshold of failed U/P logins on an hourly or daily basis. The capabilities provided by this feature are:
-- Comprehensive report showing IP address and the number of failed logins generated on an hourly/daily basis with customizable threshold.
-- Email-based alerts showing when a specific IP address has exceeded the threshold of failed U/P logins on an hourly/daily basis.
-- A download option to do a detailed analysis of the data
-For more information, see [Risky IP Report](../hybrid/how-to-connect-health-adfs.md).
---
-### Easy app config with metadata file or URL
-
-**Type:** New feature
-**Service category:** Enterprise Apps
-**Product capability:** SSO
-
-On the **Enterprise applications** page, administrators can upload a SAML metadata file to configure SAML-based sign-on for Azure AD gallery and non-gallery applications.
-
-Additionally, you can use the Azure AD application federation metadata URL to configure SSO with the targeted application.
-
-For more information, see [Configuring single sign-on to applications that are not in the Azure Active Directory application gallery](../manage-apps/view-applications-portal.md).
---
-### Azure AD Terms of use now generally available
-
-**Type:** New feature
-**Service category:** Terms of use
-**Product capability:** Compliance
--
-Azure AD terms of use have moved from public preview to generally available.
-
-For more information, see the [Azure AD terms of use feature](../conditional-access/terms-of-use.md).
---
-### Allow or block invitations to B2B users from specific organizations
-
-**Type:** New feature
-**Service category:** B2B
-**Product capability:** B2B/B2C
--
-You can now specify which partner organizations you want to share and collaborate with in Azure AD B2B Collaboration. To do this, you can choose to create a list of specific allow or deny domains. When a domain is blocked by using these capabilities, employees can no longer send invitations to people in that domain.
-
-This helps you to control access to your resources, while enabling a smooth experience for approved users.
-
-This B2B Collaboration feature is available for all Azure Active Directory customers and can be used in conjunction with Azure AD Premium features like Conditional Access and identity protection for more granular control of when and how external business users sign in and gain access.
-
-For more information, see [Allow or block invitations to B2B users from specific organizations](../external-identities/allow-deny-list.md).
---
-### New federated apps available in Azure AD app gallery
-
-**Type:** New feature
-**Service category:** Enterprise Apps
-**Product capability:** 3rd Party Integration
-
-In April 2018, we've added these 13 new apps with Federation support to our app gallery:
-
-Criterion HCM, [FiscalNote](../saas-apps/fiscalnote-tutorial.md), [Secret Server (On-Premises)](../saas-apps/secretserver-on-premises-tutorial.md), [Dynamic Signal](../saas-apps/dynamicsignal-tutorial.md), [mindWireless](../saas-apps/mindwireless-tutorial.md), [OrgChart Now](../saas-apps/orgchartnow-tutorial.md), [Ziflow](../saas-apps/ziflow-tutorial.md), [AppNeta Performance Monitor](../saas-apps/appneta-tutorial.md), [Elium](../saas-apps/elium-tutorial.md), [Fluxx Labs](../saas-apps/fluxxlabs-tutorial.md), [Cisco Cloud](../saas-apps/ciscocloud-tutorial.md), Shelf, [SafetyNet](../saas-apps/safetynet-tutorial.md)
-
-For more information about the apps, see [SaaS application integration with Azure Active Directory](../saas-apps/tutorial-list.md).
-
-For more information about listing your application in the Azure AD app gallery, see [List your application in the Azure Active Directory application gallery](../manage-apps/v2-howto-app-gallery-listing.md).
---
-### Grant B2B users in Azure AD access to your on-premises applications (public preview)
-
-**Type:** New feature
-**Service category:** B2B
-**Product capability:** B2B/B2C
-
-As an organization that uses Azure Active Directory (Azure AD) B2B collaboration capabilities to invite guest users from partner organizations to your Azure AD, you can now provide these B2B users access to on-premises apps. These on-premises apps can use SAML-based authentication or integrated Windows authentication (IWA) with Kerberos constrained delegation (KCD).
-
-For more information, see [Grant B2B users in Azure AD access to your on-premises applications](../external-identities/hybrid-cloud-to-on-premises.md).
---
-### Get SSO integration tutorials from the Azure Marketplace
-
-**Type:** Changed feature
-**Service category:** Other
-**Product capability:** 3rd Party Integration
-
-If an application that is listed in the [Azure Marketplace](https://azuremarketplace.microsoft.com/marketplace/apps/category/azure-active-directory-apps?page=1) supports SAML based single sign-on, clicking **Get it now** provides you with the integration tutorial associated with that application.
---
-### Faster performance of Azure AD automatic user provisioning to SaaS applications
-
-**Type:** Changed feature
-**Service category:** App Provisioning
-**Product capability:** 3rd Party Integration
-
-Previously, customers using the Azure Active Directory user provisioning connectors for SaaS applications (for example Salesforce, ServiceNow, and Box) could experience slow performance if their Azure AD tenants contained over 100,000 combined users and groups, and they were using user and group assignments to determine which users should be provisioned.
-
-On April 2, 2018, significant performance enhancements were deployed to the Azure AD provisioning service that greatly reduce the amount of time needed to perform initial synchronizations between Azure Active Directory and target SaaS applications.
-
-As a result, many customers that had initial synchronizations to apps that took many days or never completed, are now completing within a matter of minutes or hours.
-
-For more information, see [What happens during provisioning?](../..//active-directory/app-provisioning/how-provisioning-works.md)
---
-### Self-service password reset from Windows 10 lock screen for hybrid Azure AD joined machines
-
-**Type:** Changed feature
-**Service category:** Self Service Password Reset
-**Product capability:** User Authentication
-
-We have updated the Windows 10 SSPR feature to include support for machines that are hybrid Azure AD joined. This feature, available in Windows 10 RS4, allows users to reset their password from the lock screen of a Windows 10 machine. Users who are enabled and registered for self-service password reset can utilize this feature.
-
-For more information, see [Azure AD password reset from the login screen](../authentication/howto-sspr-windows.md).
---
-## March 2018
-
-### Certificate expire notification
-
-**Type:** Fixed
-**Service category:** Enterprise Apps
-**Product capability:** SSO
-
-Azure AD sends a notification when a certificate for a gallery or non-gallery application is about to expire.
-
-Some users did not receive notifications for enterprise applications configured for SAML-based single sign-on. This issue was resolved. Azure AD sends notifications for certificates expiring in 7, 30, and 60 days. You can see this event in the audit logs.
-
-For more information, see:
-- [Manage Certificates for federated single sign-on in Azure Active Directory](../manage-apps/manage-certificates-for-federated-single-sign-on.md)
-- [Audit activity reports in the Azure portal](../reports-monitoring/concept-audit-logs.md)
---
-### Twitter and GitHub identity providers in Azure AD B2C
-
-**Type:** New feature
-**Service category:** B2C - Consumer Identity Management
-**Product capability:** B2B/B2C
-
-You can now add Twitter or GitHub as an identity provider in Azure AD B2C. Twitter is moving from public preview to GA. GitHub is being released in public preview.
-
-For more information, see [What is Azure AD B2B collaboration?](../external-identities/what-is-b2b.md).
---
-### Restrict browser access using Intune Managed Browser with Azure AD application-based Conditional Access for iOS and Android
-
-**Type:** New feature
-**Service category:** Conditional Access
-**Product capability:** Identity Security & Protection
-
-**Now in public preview!**
-
-**Intune Managed Browser SSO:** Your employees can use single sign-on across native clients (like Microsoft Outlook) and the Intune Managed Browser for all Azure AD-connected apps.
-
-**Intune Managed Browser Conditional Access Support:** You can now require employees to use the Intune Managed browser using application-based Conditional Access policies.
-
-Read more about this in our [blog post](https://cloudblogs.microsoft.com/enterprisemobility/2018/03/15/the-intune-managed-browser-now-supports-azure-ad-sso-and-conditional-access/).
-
-For more information, see:
-- [Setup application-based Conditional Access](../conditional-access/app-based-conditional-access.md)
-- [Configure managed browser policies](/mem/intune/apps/manage-microsoft-edge)
---
-### App Proxy Cmdlets in PowerShell GA Module
-
-**Type:** New feature
-**Service category:** App Proxy
-**Product capability:** Access Control
-
-Support for Application Proxy cmdlets is now in the PowerShell GA Module! This does require you to stay updated on PowerShell modules - if you become more than a year behind, some cmdlets may stop working.
-
-For more information, see [AzureAD](/powershell/module/Azuread/).
---
-### Office 365 native clients are supported by Seamless SSO using a non-interactive protocol
-
-**Type:** New feature
-**Service category:** Authentications (Logins)
-**Product capability:** User Authentication
-
-Users who use Office 365 native clients (version 16.0.8730.xxxx and above) get a silent sign-on experience using Seamless SSO. This support is provided by the addition of a non-interactive protocol (WS-Trust) to Azure AD.
-
-For more information, see [How does sign-in on a native client with Seamless SSO work?](../hybrid/how-to-connect-sso-how-it-works.md#how-does-sign-in-on-a-native-client-with-seamless-sso-work)
---
-### Users get a silent sign-on experience, with Seamless SSO, if an application sends sign-in requests to Azure AD's tenant endpoints
-
-**Type:** New feature
-**Service category:** Authentications (Logins)
-**Product capability:** User Authentication
-
-Users get a silent sign-on experience, with Seamless SSO, if an application (for example, `https://contoso.sharepoint.com`) sends sign-in requests to Azure AD's tenant endpoints - that is, `https://login.microsoftonline.com/contoso.com/<..>` or `https://login.microsoftonline.com/<tenant_ID>/<..>` - instead of Azure AD's common endpoint (`https://login.microsoftonline.com/common/<...>`).
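
As a small illustration with placeholder values, the difference is only in the path segment of the authorization request URL that the application builds:

```python
import urllib.parse

tenant = "contoso.com"  # placeholder; a verified domain or the tenant GUID
params = urllib.parse.urlencode({
    "client_id": "00000000-0000-0000-0000-000000000000",  # placeholder client ID
    "response_type": "code",
    "redirect_uri": "https://contoso.sharepoint.com/",    # placeholder redirect URI
    "scope": "openid",
})

# Tenant endpoint: eligible for the silent Seamless SSO experience.
tenant_endpoint = f"https://login.microsoftonline.com/{tenant}/oauth2/v2.0/authorize?{params}"

# Common endpoint: the user may still be prompted to pick or type an account.
common_endpoint = f"https://login.microsoftonline.com/common/oauth2/v2.0/authorize?{params}"

print(tenant_endpoint)
print(common_endpoint)
```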
-
-For more information, see [Azure Active Directory Seamless Single Sign-On](../hybrid/how-to-connect-sso.md).
---
-### Need to add only one Azure AD URL, instead of two URLs previously, to users' Intranet zone settings to roll out Seamless SSO
-
-**Type:** New feature
-**Service category:** Authentications (Logins)
-**Product capability:** User Authentication
-
-To roll out Seamless SSO to your users, you need to add only one Azure AD URL to the users' Intranet zone settings by using group policy in Active Directory: `https://autologon.microsoftazuread-sso.com`. Previously, customers were required to add two URLs.
-
-For more information, see [Azure Active Directory Seamless Single Sign-On](../hybrid/how-to-connect-sso.md).
---
-### New Federated Apps available in Azure AD app gallery
-
-**Type:** New feature
-**Service category:** Enterprise Apps
-**Product capability:** 3rd Party Integration
-
-In March 2018, we've added these 15 new apps with Federation support to our app gallery:
-
-[Boxcryptor](../saas-apps/boxcryptor-tutorial.md), [CylancePROTECT](../saas-apps/cylanceprotect-tutorial.md), Wrike, [SignalFx](../saas-apps/signalfx-tutorial.md), Assistant by FirstAgenda, [YardiOne](../saas-apps/yardione-tutorial.md), Vtiger CRM, inwink, [Amplitude](../saas-apps/amplitude-tutorial.md), [Spacio](../saas-apps/spacio-tutorial.md), [ContractWorks](../saas-apps/contractworks-tutorial.md), [Bersin](../saas-apps/bersin-tutorial.md), [Mercell](../saas-apps/mercell-tutorial.md), [Trisotech Digital Enterprise Server](../saas-apps/trisotechdigitalenterpriseserver-tutorial.md), [Qumu Cloud](../saas-apps/qumucloud-tutorial.md).
-
-For more information about the apps, see [SaaS application integration with Azure Active Directory](../saas-apps/tutorial-list.md).
-
-For more information about listing your application in the Azure AD app gallery, see [List your application in the Azure Active Directory application gallery](../manage-apps/v2-howto-app-gallery-listing.md).
---
-### PIM for Azure Resources is generally available
-
-**Type:** New feature
-**Service category:** Privileged Identity Management
-**Product capability:** Privileged Identity Management
-
-If you are using Azure AD Privileged Identity Management for directory roles, you can now use PIM's time-bound access and assignment capabilities for Azure Resource roles such as Subscriptions, Resource Groups, Virtual Machines, and any other resource supported by Azure Resource Manager. Enforce multifactor authentication when activating roles Just-In-Time, and schedule activations in coordination with approved change windows. In addition, this release adds enhancements not available during public preview including an updated UI, approval workflows, and the ability to extend roles expiring soon and renew expired roles.
-
-For more information, see [PIM for Azure resources (Preview)](../privileged-identity-management/azure-pim-resource-rbac.md)
---
-### Adding Optional Claims to your app's tokens (public preview)
-
-**Type:** New feature
-**Service category:** Authentications (Logins)
-**Product capability:** User Authentication
-
-Your Azure AD app can now request custom or optional claims in JWTs or SAML tokens. These are claims about the user or tenant that are not included by default in the token, due to size or applicability constraints. This is currently in public preview for Azure AD apps on the v1.0 and v2.0 endpoints. See the documentation for information on what claims can be added and how to edit your application manifest to request them.
-
-For more information, see [Optional claims in Azure AD](../develop/active-directory-optional-claims.md).
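
As a hedged sketch of what the manifest edit can look like (the claim names used here, `auth_time` and `ipaddr`, are examples; check the linked documentation for the claims actually supported for each token type), the `optionalClaims` section of an application manifest takes roughly this shape, generated below as JSON:

```python
import json

# Illustrative optionalClaims fragment for an application manifest.
# The claim names are examples; consult the optional claims documentation
# for the full list of supported claims per token type.
optional_claims = {
    "optionalClaims": {
        "idToken": [
            {"name": "auth_time", "essential": False},
        ],
        "accessToken": [
            {"name": "ipaddr", "essential": False},
        ],
        "saml2Token": [],
    }
}

print(json.dumps(optional_claims, indent=2))
```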
---
-### Azure AD supports PKCE for more secure OAuth flows
-
-**Type:** New feature
-**Service category:** Authentications (Logins)
-**Product capability:** User Authentication
-
-Azure AD docs have been updated to note support for PKCE, which allows for more secure communication during the OAuth 2.0 Authorization Code grant flow. Both S256 and plaintext code_challenges are supported on the v1.0 and v2.0 endpoints.
-
-For more information, see [Request an authorization code](../develop/v2-oauth2-auth-code-flow.md#request-an-authorization-code).
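
For reference, deriving an S256 code_challenge from a code_verifier only involves a SHA-256 hash and base64url encoding, as in this minimal sketch:

```python
import base64
import hashlib
import secrets

# Generate a high-entropy code_verifier (43-128 characters of unreserved URL characters).
code_verifier = secrets.token_urlsafe(64)

# Derive the S256 code_challenge: BASE64URL(SHA256(ASCII(code_verifier))), without padding.
digest = hashlib.sha256(code_verifier.encode("ascii")).digest()
code_challenge = base64.urlsafe_b64encode(digest).rstrip(b"=").decode("ascii")

print("code_verifier: ", code_verifier)
print("code_challenge:", code_challenge)
# Send code_challenge and code_challenge_method=S256 with the authorization request,
# and the original code_verifier with the token request.
```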
---
-### Support for provisioning all user attribute values available in the Workday Get_Workers API
-
-**Type:** New feature
-**Service category:** App Provisioning
-**Product capability:** 3rd Party Integration
-
-The public preview of inbound provisioning from Workday to Active Directory and Azure AD now supports the ability to extract and provision all attribute values available in the Workday Get_Workers API. This adds support for hundreds of additional standard and custom attributes beyond the ones shipped with the initial version of the Workday inbound provisioning connector.
-
-For more information, see: [Customizing the list of Workday user attributes](../saas-apps/workday-inbound-tutorial.md#customizing-the-list-of-workday-user-attributes)
---
-### Changing group membership from dynamic to static, and vice versa
-
-**Type:** New feature
-**Service category:** Group Management
-**Product capability:** Collaboration
-
-It is possible to change how membership is managed in a group. This is useful when you want to keep the same group name and ID in the system, so any existing references to the group are still valid; creating a new group would require updating those references.
-We've updated the Azure portal to support this functionality. Now, customers can convert existing groups from dynamic membership to assigned membership and vice-versa. The existing PowerShell cmdlets are also still available.
-
-For more information, see [Dynamic membership rules for groups in Azure Active Directory](../enterprise-users/groups-dynamic-membership.md)
---
-### Improved sign-out behavior with Seamless SSO
-
-**Type:** Changed feature
-**Service category:** Authentications (Logins)
-**Product capability:** User Authentication
-
-Previously, even if users explicitly signed out of an application secured by Azure AD, they would be automatically signed back in using Seamless SSO if they were trying to access an Azure AD application again within their corpnet from their domain joined devices. With this change, sign out is supported. This allows users to choose the same or different Azure AD account to sign back in with, instead of being automatically signed in using Seamless SSO.
-
-For more information, see [Azure Active Directory Seamless Single Sign-On](../hybrid/how-to-connect-sso.md)
---
-### Application Proxy Connector Version 1.5.402.0 Released
-
-**Type:** Changed feature
-**Service category:** App Proxy
-**Product capability:** Identity Security & Protection
-
-This connector version is gradually being rolled out through November. This new connector version includes the following changes:
-- The connector now sets domain-level cookies instead of subdomain-level cookies. This ensures a smoother SSO experience and avoids redundant authentication prompts.
-- Support for chunked encoding requests
-- Improved connector health monitoring
-- Several bug fixes and stability improvements
-For more information, see [Understand Azure AD Application Proxy connectors](../app-proxy/application-proxy-connectors.md).
--
active-directory Create Lifecycle Workflow https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/governance/create-lifecycle-workflow.md
Title: Create a Lifecycle Workflow- Azure AD (preview)
-description: This article guides a user to creating a workflow using Lifecycle Workflows
+ Title: Create a lifecycle workflow (preview) - Azure AD
+description: This article guides you in creating a lifecycle workflow.
-# Create a Lifecycle workflow (Preview)
-Lifecycle Workflows allows for tasks associated with the lifecycle process to be run automatically for users as they move through their life cycle in your organization. Workflows are made up of:
+# Create a lifecycle workflow (preview)
-Workflows can be created and customized for common scenarios using templates, or you can build a template from scratch without using a template. Currently if you use the Azure portal, a created workflow must be based off a template. If you wish to create a workflow without using a template, you must create it using Microsoft Graph.
+Lifecycle workflows (preview) allow for tasks associated with the lifecycle process to be run automatically for users as they move through their lifecycle in your organization. Workflows consist of:
+
+- **Tasks**: Actions taken when a workflow is triggered.
+- **Execution conditions**: The who and when of a workflow. These conditions define which users (scope) this workflow should run against, and when (trigger) the workflow should run.
+
+You can create and customize workflows for common scenarios by using templates, or you can build a workflow from scratch without using a template. Currently, if you use the Azure portal, any workflow that you create must be based on a template. If you want to create a workflow without using a template, use Microsoft Graph.
## Prerequisites
-The Lifecycle Workflows preview requires Azure AD Premium P2. For more information, see: [License requirements](what-are-lifecycle-workflows.md#license-requirements).
+The preview of lifecycle workflows requires Azure Active Directory (Azure AD) Premium P2. For more information, see [License requirements](what-are-lifecycle-workflows.md#license-requirements).
+
+## Create a lifecycle workflow by using a template in the Azure portal
-## Create a Lifecycle workflow using a template in the Azure portal
+If you're using the Azure portal to create a workflow, you can customize existing templates to meet your organization's needs. These templates include one for pre-hire common scenarios.
-If you are using the Azure portal to create a workflow, you can customize existing templates to meet your organization's needs. This means you can customize the pre-hire common scenario template. To create a workflow based on one of these templates using the Azure portal do the following steps:
+To create a workflow based on a template:
1. Sign in to the [Azure portal](https://portal.azure.com).
-1. Select **Azure Active Directory** and then select **Identity Governance**.
+1. Select **Azure Active Directory** > **Identity Governance**.
-1. In the left menu, select **Lifecycle Workflows (Preview)**.
+1. On the left menu, select **Lifecycle Workflows (Preview)**.
-1. select **Workflows (Preview)**
+1. Select **Workflows (Preview)**.
-1. On the workflows screen, select the workflow template that you want to use.
- :::image type="content" source="media/create-lifecycle-workflow/template-list.png" alt-text="Screenshot of a list of lifecycle workflows templates." lightbox="media/create-lifecycle-workflow/template-list.png":::
-1. Enter a unique display name and description for the workflow and select **Next**.
- :::image type="content" source="media/create-lifecycle-workflow/template-basics.png" alt-text="Screenshot of workflow template basic information.":::
+1. On the **Choose a workflow** page, select the workflow template that you want to use.
-1. On the **configure scope** page select the **Trigger type** and execution conditions to be used for this workflow. For more information on what can be configured, see: [Configure scope](understanding-lifecycle-workflows.md#configure-scope).
+ :::image type="content" source="media/create-lifecycle-workflow/template-list.png" alt-text="Screenshot of a list of lifecycle workflow templates." lightbox="media/create-lifecycle-workflow/template-list.png":::
+1. On the **Basics** tab, enter a unique display name and description for the workflow, and then select **Next**.
-1. Under rules, select the **Property**, **Operator**, and give it a **value**. The following picture gives an example of a rule being set up for a sales department. For a full list of user properties supported by Lifecycle Workflows, see [Supported user properties and query parameters](/graph/api/resources/identitygovernance-rulebasedsubjectset?view=graph-rest-beta&preserve-view=true#supported-user-properties-and-query-parameters)
+ :::image type="content" source="media/create-lifecycle-workflow/template-basics.png" alt-text="Screenshot of basic information about a workflow template.":::
- :::image type="content" source="media/create-lifecycle-workflow/template-scope.png" alt-text="Screenshot of Lifecycle Workflows template scope configuration options.":::
+1. On the **Configure scope** tab, select the trigger type and execution conditions to be used for this workflow. For more information on what you can configure, see [Configure scope](understanding-lifecycle-workflows.md#configure-scope).
-1. To view your rule syntax, select the **View rule syntax** button. You can copy and paste multiple user property rules on this screen. For more detailed information on which properties that can be included see: [User Properties](/graph/aad-advanced-queries?tabs=http#user-properties). When you are finished adding rules, select **Next**.
- :::image type="content" source="media/create-lifecycle-workflow/template-syntax.png" alt-text="Screenshot of workflow rule syntax.":::
+1. Under **Rule**, enter values for **Property**, **Operator**, and **Value**. The following screenshot gives an example of a rule being set up for a sales department. For a full list of user properties that lifecycle workflows support, see [Supported user properties and query parameters](/graph/api/resources/identitygovernance-rulebasedsubjectset?view=graph-rest-beta&preserve-view=true#supported-user-properties-and-query-parameters).
-1. On the **Review tasks** page you can add a task to the template by selecting **Add task**. To enable an existing task on the list, select **enable**. You're also able to disable a task by selecting **disable**. To remove a task from the template, select **Remove** on the selected task. When you are finished with tasks for your workflow, select **Next**.
+ :::image type="content" source="media/create-lifecycle-workflow/template-scope.png" alt-text="Screenshot of scope configuration options for a lifecycle workflow template.":::
- :::image type="content" source="media/create-lifecycle-workflow/template-tasks.png" alt-text="Screenshot of adding tasks to templates.":::
+1. To view your rule syntax, select the **View rule syntax** button. You can copy and paste multiple user property rules on the panel that appears. For more information on which properties you can include, see [User properties](/graph/aad-advanced-queries?tabs=http#user-properties). When you finish adding rules, select **Next**.
-1. On the **Review+create** page you are able to review the workflow's settings. You can also choose whether or not to enable the schedule for the workflow. Select **Create** to create the workflow.
+ :::image type="content" source="media/create-lifecycle-workflow/template-syntax.png" alt-text="Screenshot of workflow rule syntax.":::
+
+1. On the **Review tasks** tab, you can add a task to the template by selecting **Add task**. To enable an existing task on the list, select **Enable**. To disable a task, select **Disable**. To remove a task from the template, select **Remove**.
- :::image type="content" source="media/create-lifecycle-workflow/template-review.png" alt-text="Screenshot of reviewing and creating a template.":::
+ When you're finished with tasks for your workflow, select **Next: Review and create**.
+
+ :::image type="content" source="media/create-lifecycle-workflow/template-tasks.png" alt-text="Screenshot of adding tasks to templates.":::
+1. On the **Review and create** tab, review the workflow's settings. You can also choose whether or not to enable the schedule for the workflow. Select **Create** to create the workflow.
+ :::image type="content" source="media/create-lifecycle-workflow/template-review.png" alt-text="Screenshot of reviewing and creating a workflow.":::
> [!IMPORTANT]
-> By default, a newly created workflow is disabled to allow for the testing of it first on smaller audiences. For more information about testing workflows before rolling them out to many users, see: [run an on-demand workflow](on-demand-workflow.md).
+> By default, a newly created workflow is disabled to allow for the testing of it first on smaller audiences. For more information about testing workflows before rolling them out to many users, see [Run an on-demand workflow](on-demand-workflow.md).
-## Create a workflow using Microsoft Graph
+## Create a lifecycle workflow by using Microsoft Graph
-To create a workflow using Microsoft Graph API, see [Create workflow (lifecycle workflow)](/graph/api/identitygovernance-lifecycleworkflowscontainer-post-workflows)
+To create a lifecycle workflow by using the Microsoft Graph API, see [Create workflow](/graph/api/identitygovernance-lifecycleworkflowscontainer-post-workflows).
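
As a rough, illustrative sketch of such a request (the payload is abbreviated; the scope rule, trigger offset, and task definition ID are placeholders, and the exact schema should be confirmed against the linked reference), a workflow can be created by posting to the lifecycle workflows endpoint in the beta Microsoft Graph API:

```python
import requests

ACCESS_TOKEN = "<access-token>"  # assumption: token carries LifecycleWorkflows.ReadWrite.All
headers = {
    "Authorization": f"Bearer {ACCESS_TOKEN}",
    "Content-Type": "application/json",
}

# Abbreviated, illustrative payload: the rule, trigger offset, and task definition ID
# are placeholders. Check the Create workflow reference for the full required schema.
workflow = {
    "category": "joiner",
    "displayName": "Onboard pre-hire employees (sample)",
    "description": "Pre-hire tasks for new employees in the Sales department",
    "isEnabled": False,
    "isSchedulingEnabled": False,
    "executionConditions": {
        "@odata.type": "#microsoft.graph.identityGovernance.triggerAndScopeBasedConditions",
        "scope": {
            "@odata.type": "#microsoft.graph.identityGovernance.ruleBasedSubjectSet",
            "rule": "(department eq 'Sales')",
        },
        "trigger": {
            "@odata.type": "#microsoft.graph.identityGovernance.timeBasedAttributeTrigger",
            "timeBasedAttribute": "employeeHireDate",
            "offsetInDays": -7,
        },
    },
    "tasks": [
        {
            "isEnabled": True,
            "taskDefinitionId": "<task-definition-id>",  # placeholder; look up the task definition ID you want
            "displayName": "Sample task",
            "arguments": [],
        }
    ],
}

response = requests.post(
    "https://graph.microsoft.com/beta/identityGovernance/lifecycleWorkflows/workflows",
    headers=headers,
    json=workflow,
    timeout=30,
)
response.raise_for_status()
print(response.json()["id"])
```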
## Next steps

- [Manage a workflow's properties](manage-workflow-properties.md)
-- [Manage Workflow Versions](manage-workflow-tasks.md)
+- [Manage workflow versions](manage-workflow-tasks.md)
active-directory Delete Lifecycle Workflow https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/governance/delete-lifecycle-workflow.md
Title: 'Delete a Lifecycle workflow'
-description: Describes how to delete a Lifecycle Workflow using.
+ Title: Delete a lifecycle workflow
+description: Learn how to delete a lifecycle workflow.
-# Delete a Lifecycle workflow (Preview)
+# Delete a lifecycle workflow (preview)
-You can remove workflows that are no longer needed. Deleting these workflows allows you to make sure your lifecycle strategy is up to date. When a workflow is deleted, it enters a soft delete state. During this period, it's still able to be viewed within the deleted workflows list, and can be restored if needed. 30 days after a workflow enters a soft delete state it will be permanently removed. If you don't wish to wait 30 days for a workflow to permanently delete you can always manually delete it yourself.
+You can remove workflows that you no longer need. Deleting these workflows helps keep your lifecycle strategy up to date.
+
+When a workflow is deleted, it enters a soft-delete state. During this period, you can still view it in the list of deleted workflows and restore it if needed. A workflow is permanently removed 30 days after it enters a soft-delete state. If you don't want to wait 30 days for a workflow to be permanently deleted, you can manually delete it.
## Prerequisites
-The Lifecycle Workflows preview requires Azure AD Premium P2. For more information, see: [License requirements](what-are-lifecycle-workflows.md#license-requirements).
+The preview of lifecycle workflows requires Azure Active Directory (Azure AD) Premium P2. For more information, see [License requirements](what-are-lifecycle-workflows.md#license-requirements).
-## Delete a workflow using the Azure portal
+## Delete a workflow by using the Azure portal
1. Sign in to the [Azure portal](https://portal.azure.com).
-1. Type in **Identity Governance** on the search bar near the top of the page and select it.
-
-1. In the left menu, select **Lifecycle Workflows (Preview)**.
+1. On the search bar near the top of the page, enter **Identity Governance**. Then select **Identity Governance** in the results.
-1. select **Workflows (Preview)**.
+1. On the left menu, select **Lifecycle Workflows (Preview)**.
-1. On the workflows screen, select the workflow you want to delete.
+1. Select **Workflows (Preview)**.
- :::image type="content" source="media/delete-lifecycle-workflow/delete-button.png" alt-text="Screenshot of list of Workflows to delete.":::
+1. On the **Workflows** page, select the workflow that you want to delete. Then select **Delete**.
-1. With the workflow highlighted, select **Delete**.
+ :::image type="content" source="media/delete-lifecycle-workflow/delete-button.png" alt-text="Screenshot of a list of workflows with one selected, along with the Delete button.":::
-1. Confirm you want to delete the selected workflow.
-
- :::image type="content" source="media/delete-lifecycle-workflow/delete-workflow.png" alt-text="Screenshot of confirming to delete a workflow.":::
+1. Confirm that you want to delete the workflow by selecting the **Delete** button.
-## View deleted workflows
+ :::image type="content" source="media/delete-lifecycle-workflow/delete-workflow.png" alt-text="Screenshot of confirming the deletion of a workflow.":::
-After deleting workflows, you can view them on the **Deleted Workflows (Preview)** page.
+## View deleted workflows in the Azure portal
+After you delete workflows, you can view them on the **Deleted workflows** page.
-1. On the left of the screen, select **Deleted Workflows (Preview)**.
+1. On the left pane, select **Deleted workflows (Preview)**.
-1. On this page, you'll see a list of deleted workflows, a description of the workflow, what date it was deleted, and its permanent delete date. By default the permanent delete date for a workflow is always 30 days after it was originally deleted.
+1. On the **Deleted workflows** page, check the list of deleted workflows. Each workflow has a description, the date of deletion, and a permanent delete date. By default, the permanent delete date for a workflow is 30 days after it was originally deleted.
- :::image type="content" source="media/delete-lifecycle-workflow/deleted-list.png" alt-text="Screenshot of a list of deleted workflows.":::
-
-1. To restore a deleted workflow, select the workflow you want to restore and select **Restore workflow**.
+ :::image type="content" source="media/delete-lifecycle-workflow/deleted-list.png" alt-text="Screenshot of a list of deleted workflows.":::
-1. To permanently delete a workflow immediately, you select the workflow you want to delete from the list, and select **Delete permanently**.
+1. To restore a deleted workflow, select it and then select **Restore workflow**.
+ To permanently delete a workflow immediately, select it and then select **Delete permanently**.
-
+## Delete a workflow by using Microsoft Graph
-## Delete a workflow using Microsoft Graph
+To delete a workflow by using an API via Microsoft Graph, see [Delete a lifecycle workflow](/graph/api/identitygovernance-workflow-delete?view=graph-rest-beta&preserve-view=true).
-To delete a workflow using API via Microsoft Graph, see: [Delete workflow (lifecycle workflow)](/graph/api/identitygovernance-workflow-delete?view=graph-rest-beta&preserve-view=true).
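As a minimal sketch, assuming the beta endpoint path described in the linked reference, the soft delete is a single `DELETE` call:

```python
# Hypothetical sketch: soft-delete a lifecycle workflow via Microsoft Graph (beta endpoint assumed).
import requests

ACCESS_TOKEN = "<token with LifecycleWorkflows.ReadWrite.All>"
WORKFLOW_ID = "<workflow-id>"

response = requests.delete(
    f"https://graph.microsoft.com/beta/identityGovernance/lifecycleWorkflows/workflows/{WORKFLOW_ID}",
    headers={"Authorization": f"Bearer {ACCESS_TOKEN}"},
)
response.raise_for_status()  # 204 No Content indicates the workflow entered the soft-delete state
```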
+## View deleted workflows by using Microsoft Graph
-## View deleted workflows using Microsoft Graph
+To view a list of deleted workflows by using an API via Microsoft Graph, see [List deleted workflows](/graph/api/identitygovernance-lifecycleworkflowscontainer-list-deleteditems).
-To View a list of deleted workflows using API via Microsoft Graph, see: [List deleted workflows](/graph/api/identitygovernance-lifecycleworkflowscontainer-list-deleteditems).
+## Permanently delete a workflow by using Microsoft Graph
-## Permanently delete a workflow using Microsoft Graph
+To permanently delete a workflow by using an API via Microsoft Graph, see [Permanently delete a deleted workflow](/graph/api/identitygovernance-deleteditemcontainer-delete).
-To permanently delete a workflow using API via Microsoft Graph, see: [Permanently delete a deleted workflow](/graph/api/identitygovernance-deleteditemcontainer-delete)
+## Restore a deleted workflow by using Microsoft Graph
-## Restore deleted workflows using Microsoft Graph
+To restore a deleted workflow by using an API via Microsoft Graph, see [Restore a deleted workflow](/graph/api/identitygovernance-workflow-restore).
-To restore a deleted workflow using API via Microsoft Graph, see: [Restore a deleted workflow](/graph/api/identitygovernance-workflow-restore)
> [!NOTE]
-> Permanently deleted workflows are not able to be restored.
+> You can't restore permanently deleted workflows.
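The preceding three Graph operations all act on the deleted-items container. The following Python sketch consolidates them; the endpoint paths are assumptions based on the linked references, so verify them there before relying on this.

```python
# Hypothetical sketch: manage soft-deleted lifecycle workflows via Microsoft Graph (beta endpoint assumed).
import requests

ACCESS_TOKEN = "<token with LifecycleWorkflows.ReadWrite.All>"
HEADERS = {"Authorization": f"Bearer {ACCESS_TOKEN}"}
BASE = "https://graph.microsoft.com/beta/identityGovernance/lifecycleWorkflows"

# List workflows currently in the soft-delete state.
deleted = requests.get(f"{BASE}/deletedItems/workflows", headers=HEADERS)
deleted.raise_for_status()
for wf in deleted.json().get("value", []):
    print(wf["id"], wf["displayName"])

WORKFLOW_ID = "<deleted-workflow-id>"

# Restore a soft-deleted workflow ...
requests.post(f"{BASE}/deletedItems/workflows/{WORKFLOW_ID}/restore", headers=HEADERS).raise_for_status()

# ... or permanently delete it instead (it can't be restored afterward).
# requests.delete(f"{BASE}/deletedItems/workflows/{WORKFLOW_ID}", headers=HEADERS).raise_for_status()
```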
## Next steps -- [What are Lifecycle Workflows?](what-are-lifecycle-workflows.md)-- [Manage Lifecycle Workflow Versions](manage-workflow-tasks.md)
+- [What are lifecycle workflows?](what-are-lifecycle-workflows.md)
+- [Manage lifecycle workflow versions](manage-workflow-tasks.md)
active-directory Entitlement Management Access Package Auto Assignment Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/governance/entitlement-management-access-package-auto-assignment-policy.md
You can use rules to determine access package assignment based on user properties in Azure Active Directory (Azure AD), part of Microsoft Entra. In Entitlement Management, an access package can have multiple policies, and each policy establishes how users get an assignment to the access package, and for how long. As an administrator, you can establish a policy for automatic assignments by supplying a membership rule that Entitlement Management will follow to create and remove assignments automatically. Similar to a [dynamic group](../enterprise-users/groups-create-rule.md), when an automatic assignment policy is created, user attributes are evaluated for matches with the policy's membership rule. When an attribute changes for a user, these automatic assignment policy rules in the access packages are processed for membership changes. Assignments to users are then added or removed depending on whether they meet the rule criteria.
-During this preview, you can have at most one automatic assignment policy in an access package.
+You can have at most one automatic assignment policy in an access package, and the policy can only be created by an administrator.
This article describes how to create an access package automatic assignment policy for an existing access package.
You'll need to have attributes populated on the users who will be in scope for b
To create a policy for an access package, you need to start from the access package's policy tab. Follow these steps to create a new policy for an access package.
-**Prerequisite role:** Global administrator, Identity Governance administrator, Catalog owner, or Access package manager
+**Prerequisite role:** Global administrator or Identity Governance administrator
1. In the Azure portal, click **Azure Active Directory** and then click **Identity Governance**.
active-directory Lifecycle Workflow Extensibility https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/governance/lifecycle-workflow-extensibility.md
# Lifecycle Workflows Custom Task Extension (Preview)
-Lifecycle Workflows allow you to create workflows that can be triggered based on joiner, mover, or leaver scenarios. While Lifecycle Workflows provide several built-in tasks to automate common scenarios throughout the lifecycle of users, eventually you may reach the limits of these built-in tasks. With the extensibility feature, you're able to utilize the concept of custom task extensions to call-out to external systems as part of a workflow. By calling out to the external systems, you're able to accomplish things, which can extend the purpose of your workflows. When a user joins your organization you can have a workflow with a custom task extension that assigns a Teams number, or have a separate workflow that grants access to an email account for a manager when a user leaves. With the extensibility feature, Lifecycle Workflows currently support creating custom tasks extensions to call-out to [Azure Logic Apps](../../logic-apps/logic-apps-overview.md).
+Lifecycle Workflows allow you to create workflows that can be triggered based on joiner, mover, or leaver scenarios. While Lifecycle Workflows provide several built-in tasks to automate common scenarios throughout the lifecycle of users, eventually you may reach the limits of these built-in tasks. With the extensibility feature, you can use custom task extensions to call out to external systems as part of a workflow. For example, when a user joins your organization, you can have a workflow with a custom task extension that assigns a Teams number, or have a separate workflow that grants a manager access to the user's email account when the user leaves. With the extensibility feature, Lifecycle Workflows currently support creating custom task extensions that call out to [Azure Logic Apps](../../logic-apps/logic-apps-overview.md).
-## Prerequisite Logic App roles required for integration with the custom task extension
+## Logic Apps prerequisites
-When you link your Azure Logic App with the custom task extension task, there are certain prerequisites that must be completed before the link can be established.
+To link an Azure Logic App with a custom task extension, the following prerequisites must be in place:
-To create a Logic App, you must have:
+- An Azure subscription
+- A resource group
+- Permissions to create a new consumption-based Logic App or access to an existing consumption-based Logic App
-- A valid Azure subscription-- A compatible resource group where the Logic App is located-
-> [!NOTE]
-> The resource group needs permissions to create, update, and read the Logic App while the custom extension is being created.
-
-The roles on the Azure Logic App required with the custom task extension, are as follows:
+One of the following Azure role assignments is required, either on the Logic App itself or on a higher scope such as the resource group, subscription, or management group:
- **Logic App contributor** - **Contributor** - **Owner** > [!NOTE]
-> The **Logic App Operator** role alone will not work with the custom task extension. For more information on the required **Logic App contributor** role, see: [Logic App Contributor](../../role-based-access-control/built-in-roles.md#logic-app-contributor).
+> The **Logic App Operator** role is not sufficient.
## Custom task extension deployment scenarios
When creating custom task extensions, the scenarios for how it interacts with Li
:::image type="content" source="media/lifecycle-workflow-extensibility/task-extension-deployment-scenarios.png" alt-text="Screenshot of custom task deployment scenarios."::: - **Launch and continue** - The Azure Logic App is started, and the following task execution immediately continues with no response expected from the Azure Logic App. This scenario is best suited if the Lifecycle workflow doesn't require any feedback (including status) from the Azure Logic App. If the Logic App is started successfully, the Lifecycle Workflow task is considered a success.-- **Launch and wait** - The Azure Logic App is started, and the following task's execution waits on the response from the Logic App. You enter a time duration for how long the custom task extension should wait for a response from the Azure Logic App. If no response is received within a customer defined duration window, the task is considered failed.
+- **Launch and wait** - The Azure Logic App is started, and the following task's execution waits on the response from the Logic App. You enter a time duration for how long the custom task extension should wait for a response from the Azure Logic App. If no response is received within the defined duration window, the task is considered failed.
:::image type="content" source="media/lifecycle-workflow-extensibility/custom-task-launch-wait.png" alt-text="Screenshot of custom task launch and wait task choice." lightbox="media/lifecycle-workflow-extensibility/custom-task-launch-wait.png"::: > [!NOTE]
-> You can also deploy a custom task that calls to a third party system. To learn more about this call, see: [taskProcessingResult: resume](/graph/api/identitygovernance-taskprocessingresult-resume).
+> The response doesn't necessarily have to be provided by the Logic App; a third-party system can respond if the Logic App only acts as an intermediary. To learn more, see [taskProcessingResult: resume](/graph/api/identitygovernance-taskprocessingresult-resume).
+ ## Response authorization
-When you create a custom task extension that waits for a response from the Logic App, you're able to define which applications can send a response
+When you create a custom task extension that waits for a response from the Logic App, you're able to define which applications can send a response.
:::image type="content" source="media/lifecycle-workflow-extensibility/launch-wait-options.png" alt-text="Screenshot of custom task extension launch and wait options.":::
-Response authorization can be utilized in one of the following ways:
+The response can be authorized in one of the following ways:
-- **System-assigned managed identity (Default)** - With this choice you Enable and utilize the Logic Apps system-assigned managed identity. For more information, see: [Authenticate access to Azure resources with managed identities in Azure Logic Apps](/azure/logic-apps/create-managed-service-identity)-- **No authorization** - With this choice you assign a Logic App or third party application an application permission (LifecycleWorkflows.ReadWrite.All), or role assignment (Lifecycle Workflows Administrator). This choice doesn't follow least privilege access as outlined in Azure Active Directory best practices. For more information on best practices for roles, see: [Best Practices for Azure AD roles](/azure/active-directory/roles/best-practices).-- **Existing application** - With this choice you're able to choose an existing application to respond. You are able to choose applications that are user-assigned or regular applications. For more information on managed identity types, see: [Managed identity types](../managed-identities-azure-resources/overview.md#managed-identity-types).
+- **System-assigned managed identity (Default)** - With this choice you enable and utilize the Logic Apps system-assigned managed identity. For more information, see: [Authenticate access to Azure resources with managed identities in Azure Logic Apps](/azure/logic-apps/create-managed-service-identity)
+- **No authorization** - With this choice, no authorization is granted automatically; you separately have to assign an application permission (LifecycleWorkflows.ReadWrite.All) or a role assignment (Lifecycle Workflows Administrator). If an application is responding, we don't recommend this option because it doesn't follow the principle of least privilege. You can also use this option if responses are provided only on behalf of a user (LifecycleWorkflows.ReadWrite.All delegated permission and the Lifecycle Workflows Administrator role assignment).
+- **Existing application** - With this choice you're able to choose an existing application to respond. This can be a regular application as well as a system or user-assigned managed identity. For more information on managed identity types, see: [Managed identity types](../managed-identities-azure-resources/overview.md#managed-identity-types).
## Custom task extension integration with Azure Logic Apps high-level steps The high-level steps for the Azure Logic Apps integration are as follows: > [!NOTE]
-> Creating a custom task extension and logic app through the workflows page in the Azure portal will automate most of these steps. For a guide on creating a custom task extension this way, see: [Trigger Logic Apps based on custom task extensions (Preview)](trigger-custom-task.md).
+> Creating a custom task extension and logic app through the Azure portal will automate most of these steps. For a guide on creating a custom task extension this way, see: [Trigger Logic Apps based on custom task extensions (Preview)](trigger-custom-task.md).
- **Create a consumption-based Azure Logic App**: A consumption-based Azure Logic App that is used to be called to from the custom task extension.-- **Configure the Azure Logic App so its compatible with Lifecycle workflows**: Configuring the consumption-based Azure Logic App so that it can be used with the custom task extension.
+- **Configure the Azure Logic App so it's compatible with Lifecycle workflows**: Configuring the consumption-based Azure Logic App so that it can be used with the custom task extension. For more information, see [Configure a Logic App for Lifecycle Workflow use (Preview)](configure-logic-app-lifecycle-workflows.md).
- **Build your custom business logic within your Azure Logic App**: Set up your business logic within the Azure Logic App using Logic App designer.
- **Create a lifecycle workflow customTaskExtension which holds necessary information about the Azure Logic App**: Creating a custom task extension that references the configured Azure Logic App.
- **Update or create a Lifecycle workflow with the "Run a custom task extension" task, referencing your created customTaskExtension**: Adding the newly created custom task extension to a new workflow, or updating the information to an existing workflow.
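As a rough, non-authoritative sketch of the *create a customTaskExtension* step, the following Python call posts a minimal extension definition to Microsoft Graph. The endpoint path, the `logicAppTriggerEndpointConfiguration` and `azureAdTokenAuthentication` property names, and all placeholder values are assumptions; see [Trigger Logic Apps based on custom task extensions (Preview)](trigger-custom-task.md) and the Graph reference for the authoritative schema.

```python
# Hypothetical sketch: create a custom task extension that points at an existing consumption Logic App.
# Property names and the beta endpoint path are assumptions; verify them against the Graph reference.
import requests

ACCESS_TOKEN = "<token with LifecycleWorkflows.ReadWrite.All>"

extension = {
    "displayName": "Assign Teams number",
    "description": "Calls a Logic App when a user joins",
    "endpointConfiguration": {
        "@odata.type": "#microsoft.graph.logicAppTriggerEndpointConfiguration",
        "subscriptionId": "<azure-subscription-id>",
        "resourceGroupName": "<resource-group-name>",
        "logicAppWorkflowName": "<logic-app-name>",
    },
    "authenticationConfiguration": {
        "@odata.type": "#microsoft.graph.azureAdTokenAuthentication",
        "resourceId": "<resource-id-of-the-logic-app>",
    },
}

response = requests.post(
    "https://graph.microsoft.com/beta/identityGovernance/lifecycleWorkflows/customTaskExtensions",
    headers={"Authorization": f"Bearer {ACCESS_TOKEN}", "Content-Type": "application/json"},
    json=extension,
)
response.raise_for_status()
print(response.json()["id"])  # reference this ID from the "Run a Custom Task Extension" workflow task
```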
active-directory Lifecycle Workflow Tasks https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/governance/lifecycle-workflow-tasks.md
Last updated 01/26/2023
# Lifecycle Workflow built-in tasks (Preview)
-Lifecycle Workflows come with many pre-configured tasks that are designed to automate common lifecycle management scenarios. These built-in tasks can be utilized to make customized workflows to suit your organization's needs. These tasks can be configured within seconds to create new workflows. These tasks also have categories based on the Joiner-Mover-Leaver model so that they can be easily placed into workflows based on need. In this article you'll get the complete list of tasks, information on common parameters each task has, and a list of unique parameters needed for each specific task.
+Lifecycle Workflows come with many pre-configured tasks that are designed to automate common lifecycle management scenarios. These built-in tasks can be utilized to make customized workflows to suit your organization's needs. These tasks can be configured within seconds to create new workflows. These tasks also have categories based on the Joiner-Mover-Leaver model so that they can be easily placed into workflows based on need. In this article you get the complete list of tasks, information on common parameters each task has, and a list of unique parameters needed for each specific task.
## Supported tasks
Common task parameters are the non-unique parameters contained in every task. Wh
||| |category | A read-only string that identifies the category or categories of the task. Automatically determined when the taskDefinitionID is chosen. | |taskDefinitionId | A string referencing a taskDefinition that determines which task to run. |
-|isEnabled | A boolean value that denotes whether the task is set to run or not. If set to "true" then the task will run. Defaults to true. |
+|isEnabled | A boolean value that denotes whether the task is set to run or not. If set to "true" then the task runs. Defaults to true. |
|displayName | A unique string that identifies the task. | |description | A string that describes the purpose of the task for administrative use. (Optional) |
-|executionSequence | A read-only integer that states in what order the task will run in a workflow. For more information about executionSequence and workflow order, see: [Configure Scope](understanding-lifecycle-workflows.md#configure-scope). |
+|executionSequence | A read-only integer that states in what order the task runs in a workflow. For more information about executionSequence and workflow order, see: [Configure Scope](understanding-lifecycle-workflows.md#configure-scope). |
|continueOnError | A boolean value that determines if the failure of this task stops the subsequent workflows from running. | |arguments | Contains unique parameters relevant for the given task. |
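To make the common parameters concrete, here's an illustrative (not authoritative) task object as it might appear in a workflow's `tasks` collection, expressed as a Python dictionary. The `taskDefinitionId` value is a placeholder; look it up for the built-in task you want. The read-only `category` and `executionSequence` parameters are omitted because the service sets them.

```python
# Illustrative task object using the common parameters described above (all values are placeholders).
send_welcome_email_task = {
    "taskDefinitionId": "<GUID of the built-in task definition>",  # determines which task runs
    "displayName": "Send welcome email",                           # unique, human-readable name
    "description": "Send welcome email to new hire",               # optional administrative note
    "isEnabled": True,                                             # set to False to skip this task
    "continueOnError": False,                                      # stop the workflow if this task fails
    "arguments": [],                                               # task-specific parameters go here
}
```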
Emails, sent from tasks, are able to be customized. If you choose to customize t
- **Subject:** Customizes the subject of emails. - **Message body:** Customizes the body of the emails being sent out.-- **Email language translation:** Overrides the email recipient's language settings. Custom text is not customized, and it is recommended to set this language to the same language as the custom text.
+- **Email language translation:** Overrides the email recipient's language settings. Custom text isn't customized, and it's recommended to set this language to the same language as the custom text.
:::image type="content" source="media/lifecycle-workflow-task/customize-email-concept.png" alt-text="Screenshot of the customization email options.":::
The Azure AD prerequisite to run the **Send welcome email to new hire** task is:
- A populated mail attribute for the user.
-For Microsoft Graph the parameters for the **Send welcome email to new hire** task are as follows:
+For Microsoft Graph, the parameters for the **Send welcome email to new hire** task are as follows:
|Parameter |Definition | |||
The Azure AD prerequisite to run the **Send onboarding reminder email** task is:
- A populated manager's mail attribute for the user.
-For Microsoft Graph the parameters for the **Send onboarding reminder email** task are as follows:
+For Microsoft Graph, the parameters for the **Send onboarding reminder email** task are as follows:
|Parameter |Definition | |||
The Azure AD prerequisites to run the **Generate Temporary Access Pass and send
> [!IMPORTANT] > A user having this task run for them in a workflow must also not have any other authentication methods, sign-ins, or AAD role assignments for this task to work for them.
-For Microsoft Graph the parameters for the **Generate Temporary Access Pass and send via email to user's manager** task are as follows:
+For Microsoft Graph, the parameters for the **Generate Temporary Access Pass and send via email to user's manager** task are as follows:
|Parameter |Definition | |||
For Microsoft Graph the parameters for the **Generate Temporary Access Pass and
### Add user to groups
-Allows users to be added to Microsoft 365 and cloud-only security groups. Mail-enabled, distribution, dynamic and role-assignable groups are not supported. To control access to on-premises applications and resources, you need to enable group writeback. For more information, see [Azure AD Connect group writeback](../hybrid/how-to-connect-group-writeback-v2.md).
+Allows users to be added to Microsoft 365 and cloud-only security groups. Mail-enabled, distribution, dynamic and role-assignable groups aren't supported. To control access to on-premises applications and resources, you need to enable group writeback. For more information, see [Azure AD Connect group writeback](../hybrid/how-to-connect-group-writeback-v2.md).
You're able to customize the task name and description for this task. :::image type="content" source="media/lifecycle-workflow-task/add-group-task.png" alt-text="Screenshot of Workflows task: Add user to group task.":::
-For Microsoft Graph the parameters for the **Add user to groups** task are as follows:
+For Microsoft Graph, the parameters for the **Add user to groups** task are as follows:
|Parameter |Definition | |||
You're able to add a user to an existing static team. You're able to customize t
:::image type="content" source="media/lifecycle-workflow-task/add-team-task.png" alt-text="Screenshot of Workflows task: add user to team.":::
-For Microsoft Graph the parameters for the **Add user to teams** task are as follows:
+For Microsoft Graph, the parameters for the **Add user to teams** task are as follows:
|Parameter |Definition | |||
For Microsoft Graph the parameters for the **Add user to teams** task are as fol
### Enable user account
-Allows cloud-only user accounts to be enabled. Users with Azure AD role assignments are not supported, nor are users with membership or ownership of role-assignable groups. You can utilize Azure Active Directory's HR driven provisioning to on-premises Active Directory to disable and enable synchronized accounts with an attribute mapping to `accountDisabled` based on data from your HR source. For more information, see: [Workday Configure attribute mappings](../saas-apps/workday-inbound-tutorial.md#part-4-configure-attribute-mappings) and [SuccessFactors Configure attribute mappings](../saas-apps/sap-successfactors-inbound-provisioning-tutorial.md#part-4-configure-attribute-mappings). You're able to customize the task name and description for this task in the Azure portal.
+Allows cloud-only user accounts to be enabled. Users with Azure AD role assignments aren't supported, nor are users with membership or ownership of role-assignable groups. You can utilize Azure Active Directory's HR driven provisioning to on-premises Active Directory to disable and enable synchronized accounts with an attribute mapping to `accountDisabled` based on data from your HR source. For more information, see: [Workday Configure attribute mappings](../saas-apps/workday-inbound-tutorial.md#part-4-configure-attribute-mappings) and [SuccessFactors Configure attribute mappings](../saas-apps/sap-successfactors-inbound-provisioning-tutorial.md#part-4-configure-attribute-mappings). You're able to customize the task name and description for this task in the Azure portal.
:::image type="content" source="media/lifecycle-workflow-task/enable-task.png" alt-text="Screenshot of Workflows task: enable user account.":::
-For Microsoft Graph the parameters for the **Enable user account** task are as follows:
+For Microsoft Graph, the parameters for the **Enable user account** task are as follows:
|Parameter |Definition | |||
The Azure AD prerequisite to run the **Run a Custom Task Extension** task is:
- A Logic App that is compatible with the custom task extension. For more information, see: [Lifecycle workflow extensibility](lifecycle-workflow-extensibility.md).
-For Microsoft Graph the parameters for the **Run a Custom Task Extension** task are as follows:
+For Microsoft Graph, the parameters for the **Run a Custom Task Extension** task are as follows:
|Parameter |Definition | |||
For more information on setting up a Logic app to run with Lifecycle Workflows,
### Disable user account
-Allows cloud-only user accounts to be disabled. Users with Azure AD role assignments are not supported, nor are users with membership or ownership of role-assignable groups. You can utilize Azure Active Directory's HR driven provisioning to on-premises Active Directory to disable and enable synchronized accounts with an attribute mapping to `accountDisabled` based on data from your HR source. For more information, see: [Workday Configure attribute mappings](../saas-apps/workday-inbound-tutorial.md#part-4-configure-attribute-mappings) and [SuccessFactors Configure attribute mappings](../saas-apps/sap-successfactors-inbound-provisioning-tutorial.md#part-4-configure-attribute-mappings). You're able to customize the task name and description for this task in the Azure portal.
+Allows cloud-only user accounts to be disabled. Users with Azure AD role assignments aren't supported, nor are users with membership or ownership of role-assignable groups. You can utilize Azure Active Directory's HR driven provisioning to on-premises Active Directory to disable and enable synchronized accounts with an attribute mapping to `accountDisabled` based on data from your HR source. For more information, see: [Workday Configure attribute mappings](../saas-apps/workday-inbound-tutorial.md#part-4-configure-attribute-mappings) and [SuccessFactors Configure attribute mappings](../saas-apps/sap-successfactors-inbound-provisioning-tutorial.md#part-4-configure-attribute-mappings). You're able to customize the task name and description for this task in the Azure portal.
:::image type="content" source="media/lifecycle-workflow-task/disable-task.png" alt-text="Screenshot of Workflows task: disable user account.":::
-For Microsoft Graph the parameters for the **Disable user account** task are as follows:
+For Microsoft Graph, the parameters for the **Disable user account** task are as follows:
|Parameter |Definition | |||
For Microsoft Graph the parameters for the **Disable user account** task are as
### Remove user from selected groups
-Allows users to be removed from Microsoft 365 and cloud-only security groups. Mail-enabled, distribution, dynamic and role-assignable groups are not supported. To control access to on-premises applications and resources, you need to enable group writeback. For more information, see [Azure AD Connect group writeback](../hybrid/how-to-connect-group-writeback-v2.md).
+Allows users to be removed from Microsoft 365 and cloud-only security groups. Mail-enabled, distribution, dynamic and role-assignable groups aren't supported. To control access to on-premises applications and resources, you need to enable group writeback. For more information, see [Azure AD Connect group writeback](../hybrid/how-to-connect-group-writeback-v2.md).
You're able to customize the task name and description for this task in the Azure portal.
You're able to customize the task name and description for this task in the Azur
-For Microsoft Graph the parameters for the **Remove user from selected groups** task are as follows:
+For Microsoft Graph, the parameters for the **Remove user from selected groups** task are as follows:
|Parameter |Definition | |||
For Microsoft Graph the parameters for the **Remove user from selected groups**
### Remove users from all groups
-Allows users to be removed from every Microsoft 365 and cloud-only security group they're a member of. Mail-enabled, distribution, dynamic and role-assignable groups are not supported. To control access to on-premises applications and resources, you need to enable group writeback. For more information, see [Azure AD Connect group writeback](../hybrid/how-to-connect-group-writeback-v2.md).
+Allows users to be removed from every Microsoft 365 and cloud-only security group they're a member of. Mail-enabled, distribution, dynamic and role-assignable groups aren't supported. To control access to on-premises applications and resources, you need to enable group writeback. For more information, see [Azure AD Connect group writeback](../hybrid/how-to-connect-group-writeback-v2.md).
You're able to customize the task name and description for this task in the Azur
:::image type="content" source="media/lifecycle-workflow-task/remove-all-groups-task.png" alt-text="Screenshot of Workflows task: remove user from all groups.":::
-For Microsoft Graph the parameters for the **Remove users from all groups** task are as follows:
+For Microsoft Graph, the parameters for the **Remove users from all groups** task are as follows:
|Parameter |Definition | |||
For Microsoft Graph the parameters for the **Remove users from all groups** task
Allows a user to be removed from one or multiple static teams. You're able to customize the task name and description for this task in the Azure portal. :::image type="content" source="media/lifecycle-workflow-task/remove-user-team-task.png" alt-text="Screenshot of Workflows task: remove user from teams.":::
-For Microsoft Graph the parameters for the **Remove User from Teams** task are as follows:
+For Microsoft Graph, the parameters for the **Remove User from Teams** task are as follows:
|Parameter |Definition | |||
For Microsoft Graph the parameters for the **Remove User from Teams** task are a
Allows users to be removed from every static team they're a member of. You're able to customize the task name and description for this task in the Azure portal. :::image type="content" source="media/lifecycle-workflow-task/remove-user-all-team-task.png" alt-text="Screenshot of Workflows task: remove user from all teams.":::
-For Microsoft Graph the parameters for the **Remove users from all teams** task are as follows:
+For Microsoft Graph, the parameters for the **Remove users from all teams** task are as follows:
|Parameter |Definition | |||
Allows all direct license assignments to be removed from a user. For group-based
You're able to customize the task name and description for this task in the Azure portal. :::image type="content" source="media/lifecycle-workflow-task/remove-license-assignment-task.png" alt-text="Screenshot of Workflows task: remove all licenses from users.":::
-For Microsoft Graph the parameters for the **Remove all license assignment from user** task are as follows:
+For Microsoft Graph, the parameters for the **Remove all license assignment from user** task are as follows:
|Parameter |Definition | |||
For Microsoft Graph the parameters for the **Remove all license assignment from
### Delete User
-Allows cloud-only user accounts to be deleted. Users with Azure AD role assignments are not supported, nor are users with membership or ownership of role-assignable groups. You're able to customize the task name and description for this task in the Azure portal.
+Allows cloud-only user accounts to be deleted. Users with Azure AD role assignments aren't supported, nor are users with membership or ownership of role-assignable groups. You're able to customize the task name and description for this task in the Azure portal.
:::image type="content" source="media/lifecycle-workflow-task/delete-user-task.png" alt-text="Screenshot of Workflows task: Delete user account.":::
-For Microsoft Graph the parameters for the **Delete User** task are as follows:
+For Microsoft Graph, the parameters for the **Delete User** task are as follows:
|Parameter |Definition | |||
The Azure AD prerequisite to run the **Send email on user last day** task are:
- A populated manager attribute for the user. - A populated manager's mail attribute for the user.
-For Microsoft Graph the parameters for the **Send email on user last day** task are as follows:
+For Microsoft Graph, the parameters for the **Send email on user last day** task are as follows:
|Parameter |Definition | |||
The Azure AD prerequisite to run the **Send email to users manager after their l
- A populated manager's mail attribute for the user.
-For Microsoft Graph the parameters for the **Send email to users manager after their last day** task are as follows:
+For Microsoft Graph, the parameters for the **Send email to users manager after their last day** task are as follows:
|Parameter |Definition | |||
active-directory Tutorial Offboard Custom Workflow Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/governance/tutorial-offboard-custom-workflow-portal.md
Title: 'Execute employee off-boarding tasks in real-time on their last day of work with Azure portal (preview)'
-description: Tutorial for off-boarding users from an organization using Lifecycle workflows with Azure portal (preview).
+ Title: Execute employee termination tasks by using lifecycle workflows (preview)
+description: Learn how to remove users from an organization in real time on their last day of work by using lifecycle workflows (preview) in the Azure portal.
-# Execute employee off-boarding tasks in real-time on their last day of work with Azure portal (preview)
+# Execute employee termination tasks by using lifecycle workflows (preview)
-This tutorial provides a step-by-step guide on how to execute a real-time employee termination with Lifecycle workflows using the Azure portal.
+This tutorial provides a step-by-step guide on how to execute a real-time employee termination by using lifecycle workflows (preview) in the Azure portal.
-This off-boarding scenario runs a workflow on-demand and accomplishes the following tasks:
-
-1. Remove user from all groups
-2. Remove user from all Teams
-3. Delete user account
+This *leaver* scenario runs a workflow on demand and accomplishes the following tasks:
-You may learn more about running a workflow on-demand [here](on-demand-workflow.md).
+- Remove the user from all groups.
+- Remove the user from all Microsoft Teams memberships.
+- Delete the user account.
+
+For more information, see [Run a workflow on demand](on-demand-workflow.md).
## Prerequisites
-The Lifecycle Workflows preview requires Azure AD Premium P2. For more information, see: [License requirements](what-are-lifecycle-workflows.md#license-requirements).
+The preview of lifecycle workflows requires Azure Active Directory (Azure AD) Premium P2. For more information, see [License requirements](what-are-lifecycle-workflows.md#license-requirements).
+
+## Before you begin
+
+As part of the prerequisites for completing this tutorial, you need an account that has group and Teams memberships and that can be deleted during the tutorial. For comprehensive instructions on how to complete these prerequisite steps, see [Prepare user accounts for lifecycle workflows](tutorial-prepare-azure-ad-user-accounts.md).
+
+The leaver scenario includes the following steps:
+
+1. Prerequisite: Create a user account that represents an employee leaving your organization.
+1. Prerequisite: Prepare the user account with group and Teams memberships.
+1. Create the lifecycle management workflow.
+1. Run the workflow on demand.
+1. Verify that the workflow was successfully executed.
+
+## Create a workflow by using the leaver template
+
+Use the following steps to create a leaver on-demand workflow that will execute a real-time employee termination by using lifecycle workflows in the Azure portal:
+
+1. Sign in to the [Azure portal](https://portal.azure.com).
+2. On the right, select **Azure Active Directory**.
+3. Select **Identity Governance**.
+4. Select **Lifecycle workflows (Preview)**.
+5. On the **Overview** tab, select **New workflow**.
+
+ :::image type="content" source="media/tutorial-lifecycle-workflows/new-workflow.png" alt-text="Screenshot of the Overview tab and the button for creating a new workflow." lightbox="media/tutorial-lifecycle-workflows/new-workflow.png":::
+
+6. From the collection of templates, choose **Select** under **Real-time employee termination**.
+
+ :::image type="content" source="media/tutorial-lifecycle-workflows/select-template.png" alt-text="Screenshot of selecting a workflow template for real-time employee termination." lightbox="media/tutorial-lifecycle-workflows/select-template.png":::
+7. Configure basic information about the workflow, and then select **Next: Review tasks**.
-## Before you begin
+ :::image type="content" source="media/tutorial-lifecycle-workflows/real-time-leaver.png" alt-text="Screenshot of the tab for basic workflow information." lightbox="media/tutorial-lifecycle-workflows/real-time-leaver.png":::
-As part of the prerequisites for completing this tutorial, you need an account that has group and Teams memberships and that can be deleted during the tutorial. For more comprehensive instructions on how to complete these prerequisite steps, you may refer to the [Preparing user accounts for Lifecycle workflows tutorial](tutorial-prepare-azure-ad-user-accounts.md).
+8. Inspect the tasks if you want, but no additional configuration is needed. Select **Next: Select users** when you're finished.
-The leaver scenario can be broken down into the following:
-- **Prerequisite:** Create a user account that represents an employee leaving your organization-- **Prerequisite:** Prepare the user account with groups and Teams memberships-- Create the lifecycle management workflow-- Run the workflow on-demand-- Verify that the workflow was successfully executed
+ :::image type="content" source="media/tutorial-lifecycle-workflows/real-time-tasks.png" alt-text="Screenshot of the tab for reviewing template tasks." lightbox="media/tutorial-lifecycle-workflows/real-time-tasks.png":::
-## Create a workflow using leaver template
-Use the following steps to create a leaver on-demand workflow that will execute a real-time employee termination with Lifecycle workflows using the Azure portal.
+9. Choose the **Select users to run now** option. It allows you to select users for whom the workflow will run immediately after creation. Regardless of the selection, you can run the workflow on demand later at any time, as needed.
- 1. Sign in to Azure portal
- 2. On the right, select **Azure Active Directory**.
- 3. Select **Identity Governance**.
- 4. Select **Lifecycle workflows (Preview)**.
- 5. On the **Overview (Preview)** page, select **New workflow**.
- :::image type="content" source="media/tutorial-lifecycle-workflows/new-workflow.png" alt-text="Screenshot of selecting new workflow." lightbox="media/tutorial-lifecycle-workflows/new-workflow.png":::
+ :::image type="content" source="media/tutorial-lifecycle-workflows/real-time-users.png" alt-text="Screenshot of the option for selecting users to run now." lightbox="media/tutorial-lifecycle-workflows/real-time-users.png":::
- 6. From the templates, select **Select** under **Real-time employee termination**.
- :::image type="content" source="media/tutorial-lifecycle-workflows/select-template.png" alt-text="Screenshot of selecting template leaver workflow." lightbox="media/tutorial-lifecycle-workflows/select-template.png":::
+10. Select **Add users** to designate the users for this workflow.
- 7. Next, you configure the basic information about the workflow. Select **Next:Review tasks** when you're done with this step.
- :::image type="content" source="media/tutorial-lifecycle-workflows/real-time-leaver.png" alt-text="Screenshot of review template tasks." lightbox="media/tutorial-lifecycle-workflows/real-time-leaver.png":::
+ :::image type="content" source="media/tutorial-lifecycle-workflows/real-time-add-users.png" alt-text="Screenshot of the button for adding users." lightbox="media/tutorial-lifecycle-workflows/real-time-add-users.png":::
- 8. On the following page, you may inspect the tasks if desired but no additional configuration is needed. Select **Next: Select users** when you're finished.
- :::image type="content" source="media/tutorial-lifecycle-workflows/real-time-tasks.png" alt-text="Screenshot of template tasks." lightbox="media/tutorial-lifecycle-workflows/real-time-tasks.png":::
+11. A panel with the list of available users appears on the right side of the window. Choose **Select** when you're done with your selection.
- 9. For the user selection, select **Select users**. This allows you to select users for which the workflow will be executed immediately after creation. Regardless of the selection, you can run the workflow on-demand later at any time as needed.
- :::image type="content" source="media/tutorial-lifecycle-workflows/real-time-users.png" alt-text="Select real time leaver template users." lightbox="media/tutorial-lifecycle-workflows/real-time-users.png":::
-
- 10. Next, select on **+Add users** to designate the users to be executed on this workflow.
- :::image type="content" source="media/tutorial-lifecycle-workflows/real-time-add-users.png" alt-text="Screenshot of real time leaver add users." lightbox="media/tutorial-lifecycle-workflows/real-time-add-users.png":::
-
- 11. A panel with the list of available users pops up on the right side of the screen. Select **Select** when you're done with your selection.
- :::image type="content" source="media/tutorial-lifecycle-workflows/real-time-user-list.png" alt-text="Screenshot of real time leaver template selected users." lightbox="media/tutorial-lifecycle-workflows/real-time-user-list.png":::
+ :::image type="content" source="media/tutorial-lifecycle-workflows/real-time-user-list.png" alt-text="Screenshot of a list of available users." lightbox="media/tutorial-lifecycle-workflows/real-time-user-list.png":::
- 12. Select **Next: Review and create** when you're satisfied with your selection.
- :::image type="content" source="media/tutorial-lifecycle-workflows/real-time-review-users.png" alt-text="Screenshot of reviewing template users." lightbox="media/tutorial-lifecycle-workflows/real-time-review-users.png":::
+12. Select **Next: Review and create** when you're satisfied with your selection of users.
- 13. On the review blade, verify the information is correct and select **Create**.
- :::image type="content" source="media/tutorial-lifecycle-workflows/real-time-create.png" alt-text="Screenshot of creating real time leaver workflow." lightbox="media/tutorial-lifecycle-workflows/real-time-create.png":::
+ :::image type="content" source="media/tutorial-lifecycle-workflows/real-time-review-users.png" alt-text="Screenshot of added users." lightbox="media/tutorial-lifecycle-workflows/real-time-review-users.png":::
-## Run the workflow
-Now that the workflow is created, it will automatically run the workflow every 3 hours. Lifecycle workflows check every 3 hours for users in the associated execution condition and execute the configured tasks for those users. However, for the tutorial, we would like to run it immediately. To run a workflow immediately, we can use the on-demand feature.
+13. Verify that the information is correct, and then select **Create**.
->[!NOTE]
->Be aware that you currently cannot run a workflow on-demand if it is set to disabled. You need to set the workflow to enabled to use the on-demand feature.
+ :::image type="content" source="media/tutorial-lifecycle-workflows/real-time-create.png" alt-text="Screenshot of the tab for reviewing workflow choices, along with the button for creating the workflow." lightbox="media/tutorial-lifecycle-workflows/real-time-create.png":::
-To run a workflow on-demand, for users using the Azure portal, do the following steps:
+## Run the workflow
+
+Now that you've created the workflow, it will automatically run every three hours. Lifecycle workflows check every three hours for users in the associated execution condition and execute the configured tasks for those users.
+
+To run the workflow immediately, you can use the on-demand feature.
+
+> [!NOTE]
+> You currently can't run a workflow on demand if it's set to **Disabled**. You need to set the workflow to **Enabled** to use the on-demand feature.
+
+To run a workflow on demand for users by using the Azure portal:
+
+1. On the workflow screen, select the specific workflow that you want to run.
+2. Select **Run on demand**.
+3. On the **Select users** tab, select **Add users**.
+4. Add users.
+5. Select **Run workflow**.
- 1. On the workflow screen, select the specific workflow you want to run.
- 2. Select **Run on demand**.
- 3. On the **select users** tab, select **add users**.
- 4. Add a user.
- 5. Select **Run workflow**.
-
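If you prefer to script the on-demand run instead of using the portal, a hedged sketch of the equivalent Microsoft Graph call follows. The `activate` action path and the `subjects` payload are assumptions about the beta lifecycle workflows API; see [Run a workflow on demand](on-demand-workflow.md) for the supported options.

```python
# Hypothetical sketch: run a lifecycle workflow on demand for one user via Microsoft Graph (beta assumed).
import requests

ACCESS_TOKEN = "<token with LifecycleWorkflows.ReadWrite.All>"
WORKFLOW_ID = "<workflow-id>"

response = requests.post(
    f"https://graph.microsoft.com/beta/identityGovernance/lifecycleWorkflows/workflows/{WORKFLOW_ID}/activate",
    headers={"Authorization": f"Bearer {ACCESS_TOKEN}", "Content-Type": "application/json"},
    json={"subjects": [{"id": "<user-object-id>"}]},  # users the workflow should process now
)
response.raise_for_status()  # 204 No Content indicates the run was accepted
```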
## Check tasks and workflow status
-At any time, you may monitor the status of the workflows and the tasks. As a reminder, there are three different data pivots, users runs, and tasks that are currently available in public preview. You may learn more in the how-to guide [Check the status of a workflow (preview)](check-status-workflow.md). In the course of this tutorial, we look at the status using the user focused reports.
+At any time, you can monitor the status of workflows and tasks. Three data pivots are currently available in public preview: users, runs, and tasks. You can learn more in the how-to guide [Check the status of a workflow (preview)](check-status-workflow.md). In this tutorial, you check the status by using the user-focused reports.
+
+1. On the **Overview** page for the workflow, select **Workflow history (Preview)**.
- 1. To begin, select the **Workflow history (Preview)** tab to view the user summary and associated workflow tasks and statuses.
- :::image type="content" source="media/tutorial-lifecycle-workflows/workflow-history-real-time.png" alt-text="Screenshot of real time history overview." lightbox="media/tutorial-lifecycle-workflows/workflow-history-real-time.png":::
+ :::image type="content" source="media/tutorial-lifecycle-workflows/workflow-history-real-time.png" alt-text="Screenshot of the overview page for a workflow." lightbox="media/tutorial-lifecycle-workflows/workflow-history-real-time.png":::
-1. Once the **Workflow history (Preview)** tab has been selected, you land on the workflow history page as shown.
- :::image type="content" source="media/tutorial-lifecycle-workflows/user-summary-real-time.png" alt-text="Screenshot of real time workflow history." lightbox="media/tutorial-lifecycle-workflows/user-summary-real-time.png":::
+ The **Workflow history** page appears.
-1. Next, you may select **Total tasks** for the user Jane Smith to view the total number of tasks created and their statuses. In this example, there are three total tasks assigned to the user Jane Smith.
- :::image type="content" source="media/tutorial-lifecycle-workflows/total-tasks-real-time.png" alt-text="Screenshot of total tasks for real time workflow." lightbox="media/tutorial-lifecycle-workflows/total-tasks-real-time.png":::
+ :::image type="content" source="media/tutorial-lifecycle-workflows/user-summary-real-time.png" alt-text="Screenshot of real-time workflow history." lightbox="media/tutorial-lifecycle-workflows/user-summary-real-time.png":::
-1. To add an extra layer of granularity, you may select **Failed tasks** for the user Wade Warren to view the total number of failed tasks assigned to the user Wade Warren.
- :::image type="content" source="media/tutorial-lifecycle-workflows/failed-tasks-real-time.png" alt-text="Screenshot of failed tasks for real time workflow." lightbox="media/tutorial-lifecycle-workflows/failed-tasks-real-time.png":::
+1. Select **Total tasks** for a user to view the total number of tasks created and their statuses.
-1. Similarly, you may select **Unprocessed tasks** for the user Wade Warren to view the total number of unprocessed or canceled tasks assigned to the user Wade Warren.
- :::image type="content" source="media/tutorial-lifecycle-workflows/canceled-tasks-real-time.png" alt-text="Screenshot of unprocessed tasks for real time workflow." lightbox="media/tutorial-lifecycle-workflows/canceled-tasks-real-time.png":::
+ :::image type="content" source="media/tutorial-lifecycle-workflows/total-tasks-real-time.png" alt-text="Screenshot of total tasks for a real-time workflow." lightbox="media/tutorial-lifecycle-workflows/total-tasks-real-time.png":::
+
+1. To add an extra layer of granularity, select **Failed tasks** for a user to view the total number of failed tasks assigned to that user.
+
+ :::image type="content" source="media/tutorial-lifecycle-workflows/failed-tasks-real-time.png" alt-text="Screenshot of failed tasks for a real-time workflow." lightbox="media/tutorial-lifecycle-workflows/failed-tasks-real-time.png":::
+
+1. Select **Unprocessed tasks** for a user to view the total number of unprocessed or canceled tasks assigned to that user.
+
+ :::image type="content" source="media/tutorial-lifecycle-workflows/canceled-tasks-real-time.png" alt-text="Screenshot of unprocessed tasks for a real-time workflow." lightbox="media/tutorial-lifecycle-workflows/canceled-tasks-real-time.png":::
## Next steps-- [Preparing user accounts for Lifecycle workflows (preview)](tutorial-prepare-azure-ad-user-accounts.md)-- [Complete employee offboarding tasks in real-time on their last day of work using Lifecycle Workflows APIs](/graph/tutorial-lifecycle-workflows-offboard-custom-workflow)+
+- [Prepare user accounts for lifecycle workflows (preview)](tutorial-prepare-azure-ad-user-accounts.md)
+- [Complete tasks in real time on an employee's last day of work by using lifecycle workflow APIs](/graph/tutorial-lifecycle-workflows-offboard-custom-workflow)
active-directory What Are Lifecycle Workflows https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/governance/what-are-lifecycle-workflows.md
Title: 'What are lifecycle workflows?'
-description: Describes overview of Lifecycle workflow feature.
+ Title: What are lifecycle workflows?
+description: Get an overview of the lifecycle workflow feature of Azure AD.
-# What are Lifecycle Workflows? (Public Preview)
+# What are lifecycle workflows (preview)?
-Lifecycle Workflows is a new Identity Governance service that enables organizations to manage Azure AD users by automating these three basic lifecycle processes:
+Lifecycle workflows (preview) are a new identity governance feature that enables organizations to manage Azure Active Directory (Azure AD) users by automating these three basic lifecycle processes:
-- Joiner - When an individual comes into scope of needing access. An example is a new employee joining a company or organization.-- Mover - When an individual moves between boundaries within an organization. This movement may require more access or authorization. An example would be a user who was in marketing is now a member of the sales organization.-- Leaver - When an individual leaves the scope of needing access, access may need to be removed. Examples would be an employee who is retiring or an employee who has been terminated.
+- **Joiner**: When an individual enters the scope of needing access. An example is a new employee joining a company or organization.
+- **Mover**: When an individual moves between boundaries within an organization. This movement might require more access or authorization. An example is a user who was in marketing and is now a member of the sales organization.
+- **Leaver**: When an individual leaves the scope of needing access. This movement might require the removal of access. Examples are an employee who's retiring or an employee who's terminated.
-Workflows contain specific processes, which run automatically against users as they move through their life cycle. Workflows are made up of [Tasks](lifecycle-workflow-tasks.md) and [Execution conditions](understanding-lifecycle-workflows.md#understanding-lifecycle-workflows).
+Workflows contain specific processes that run automatically against users as they move through their lifecycle. Workflows consist of [tasks](lifecycle-workflow-tasks.md) and [execution conditions](understanding-lifecycle-workflows.md#understanding-lifecycle-workflows).
-Tasks are specific actions that run automatically when a workflow is triggered. An Execution condition defines the 'Scope' of "who" and the 'Trigger' of "when" a workflow will be performed. For example, sending a manager an email 7 days before the value in the NewEmployeeHireDate attribute of new employees can be described as a workflow. It consists of:
- - Task: send email
- - When (trigger): Seven days before the NewEmployeeHireDate attribute value
- - Who (scope): new employees
+Tasks are specific actions that run automatically when a workflow is triggered. An execution condition defines the scope of who's affected and the trigger of when a workflow will be performed. For example, sending a manager an email seven days before the date in the `NewEmployeeHireDate` attribute of new employees can be described as a workflow. It consists of:
-Automatic workflow schedules [trigger](understanding-lifecycle-workflows.md#trigger-details) off of user attributes. Scoping of automatic workflows is possible using a wide range of user and extended attributes; such as the "department" that a user belongs to.
+- Task: Send email.
+- Who (scope): New employees.
+- When (trigger): Seven days before the `NewEmployeeHireDate` attribute value.
-Finally, Lifecycle Workflows can even [integrate with Logic Apps](lifecycle-workflow-extensibility.md) tasks ability to extend workflows for more complex scenarios using your existing Logic apps.
+An automatic workflow schedules a [trigger](understanding-lifecycle-workflows.md#trigger-details) based on user attributes. Scoping of automatic workflows is possible through a wide range of user and extended attributes, such as the department that a user belongs to.
+Lifecycle workflows can even [integrate with Azure Logic Apps](lifecycle-workflow-extensibility.md) to extend workflows for more complex scenarios through your existing logic apps.
- :::image type="content" source="media/what-are-lifecycle-workflows/intro-2.png" alt-text="Lifecycle Workflows diagram." lightbox="media/what-are-lifecycle-workflows/intro-2.png":::
+## Why to use lifecycle workflows
-## Why use Lifecycle workflows?
-Anyone who wants to modernize their identity lifecycle management process for employees, needs to ensure:
+Anyone who wants to modernize an identity lifecycle management process for employees needs to ensure:
- - **New employee on-boarding** - That when a user joins the organization, they're ready to go on day one. They have the correct access to the information, membership to groups, and applications they need.
- - **Employee retirement/terminations/off-boarding** - That users who are no longer tied to the company for various reasons (termination, separation, leave of absence or retirement), have their access revoked in a timely manner.
- - **Easy to administer in my organization** - That there's a seamless process to accomplish the above tasks, that isn't overly burdensome or time consuming for Administrators.
- - **Robust troubleshooting/auditing/compliance** - That there's the ability to easily troubleshoot issues when they arise and that there's sufficient logging to help with this and compliance related issues.
+- That when users join the organization, they're ready to go on day one. They have the correct access to information, group memberships, and applications that they need.
+- That users who are no longer tied to the company for various reasons (termination, separation, leave of absence, or retirement) have their access revoked in a timely way.
+- That the process for providing or revoking access isn't overly burdensome or time consuming for administrators.
+- That administrators and employees can easily troubleshoot problems, and that logging is sufficient to help with troubleshooting, auditing, and compliance.
-The following are key reasons to use Lifecycle workflows.
-- **Extend** your HR-driven provisioning process with other workflows that simplify and automate tasks. -- **Centralize** your workflow process so you can easily create and manage workflows all in one location.-- Easily **troubleshoot** workflow scenarios with the Workflow history and Audit logs.-- **Manage** user lifecycle at scale. As your organization grows, the need for other resources to manage user lifecycles are reduced.-- **Reduce** or remove manual tasks that were done in the past with automated lifecycle workflows.-- **Apply** logic apps to extend workflows for more complex scenarios using your existing Logic apps.
+Key reasons to use lifecycle workflows include:
+- Extend your HR-driven provisioning process with other workflows that simplify and automate tasks.
+- Centralize your workflow process so you can easily create and manage workflows in one location.
+- Easily troubleshoot workflow scenarios with the workflow history and audit logs.
+- Manage user lifecycle at scale. As your organization grows, the need for other resources to manage user lifecycles decreases.
+- Reduce or remove manual tasks.
+- Apply logic apps to extend workflows for more complex scenarios with your existing logic apps.
-All of the above can help ensure a holistic experience by allowing you to remove other dependencies and applications to achieve the same result. Thus translating into, increased on-boarding and off-boarding efficiency.
+Those capabilities can help ensure a holistic experience by allowing you to remove other dependencies and applications to achieve the same result. You can then increase efficiency in new employee orientation and in removal of former employees from the system.
+## When to use lifecycle workflows
-## When to use Lifecycle Workflows
-You can use Lifecycle workflows to address any of the following conditions.
-- **Automating and extending user onboarding/HR provisioning** - Use Lifecycle workflows when you want to extend your HR provisioning scenarios by automating tasks such as generating temporary passwords and emailing managers. If you currently have a manual process for on-boarding, use Lifecycle workflows as part of an automated process.-- **Automate group membership**: When groups in your organization are well-defined, you can automate user membership of these groups. Some of the benefits and differences from dynamic groups include:
- - LCW manages static groups, where a dynamic group rule isn't needed
 - No need to have one rule per group – the LCW rule determines the set/scope of users to execute workflows against not which group
 - LCW helps manage users' lifecycle beyond attributes supported in dynamic groups – for example, 'X' days before the employeeHireDate
- - LCW can perform actions on the group not just the membership.
-- **Workflow history and auditing** Use Lifecycle workflows when you need to create an audit trail of user lifecycle processes. Using the portal you can view history and audits for on-boarding and off-boarding scenarios.-- **Automate user account management**: Making sure users who are leaving have their access to resources revoked is a key part of the identity lifecycle process. Lifecycle Workflows allow you to automate the disabling and removal of user accounts.-- **Integrate with Logic Apps**: Ability to apply logic apps to extend workflows for more complex scenarios using your existing Logic apps.
+You can use lifecycle workflows to address any of the following conditions:
+
+- **Automating and extending user orientation and HR provisioning**: Use lifecycle workflows when you want to extend your HR provisioning scenarios by automating tasks such as generating temporary passwords and emailing managers. If you currently have a manual process for orientation, use lifecycle workflows as part of an automated process.
+- **Automating group membership**: When groups in your organization are well defined, you can automate user membership in those groups. Benefits and differences from dynamic groups include:
+ - Lifecycle workflows manage static groups, where you don't need a dynamic group rule.
+ - There's no need to have one rule per group. Lifecycle workflow rules determine the scope of users to execute workflows against, not which group.
+ - Lifecycle workflows help manage users' lifecycle beyond attributes supported in dynamic groups (for example, a certain number of days before the `NewEmployeeHireDate` attribute value).
+ - Lifecycle workflows can perform actions on the group, not just the membership.
+- **Workflow history and auditing**: Use lifecycle workflows when you need to create an audit trail of user lifecycle processes. By using the Azure portal, you can view history and audits for orientation and departure scenarios.
+- **Automating user account management**: A key part of the identity lifecycle process is making sure that users who are leaving have their access to resources revoked. You can use lifecycle workflows to automate the disabling and removal of user accounts.
+- **Integrating with logic apps**: You can apply logic apps to extend workflows for more complex scenarios.
## License requirements [!INCLUDE [Azure AD Premium P2 license](../../../includes/lifecycle-workflows-license.md)]
+During this preview, you can:
-### How many licenses must you have?
-
-To preview the Lifecycle Workflows feature, you must have an Azure AD Premium P2 license in your tenant. During this preview, you're able to:
-
- Create, manage, and delete workflows up to the total limit of 50 workflows. - Trigger on-demand and scheduled workflow execution. - Manage and configure existing tasks to create workflows that are specific to your needs. - Create up to 100 custom task extensions to be used in your workflows.
-
- ## Next steps-- [Create a custom workflow using the Azure portal](tutorial-onboard-custom-workflow-portal.md)-- [Create a Lifecycle workflow](create-lifecycle-workflow.md)+
+- [Create a custom workflow by using the Azure portal](tutorial-onboard-custom-workflow-portal.md)
+- [Create a lifecycle workflow](create-lifecycle-workflow.md)
active-directory Add Application Portal Setup Oidc Sso https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/manage-apps/add-application-portal-setup-oidc-sso.md
Previously updated : 04/14/2022 Last updated : 04/14/2023
It is recommended that you use a non-production environment to test the steps in
To configure OIDC-based SSO, you need: - An Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F).-- One of the following roles: Global Administrator, Cloud Application Administrator, Application Administrator, or owner of the service principal.
+- One of the following roles: Global Administrator or owner of the service principal.
## Add the application
active-directory Configure Admin Consent Workflow https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/manage-apps/configure-admin-consent-workflow.md
To enable the admin consent workflow and choose reviewers:
1. Select **Save**. It can take up to an hour for the workflow to become enabled. > [!NOTE]
-> You can add or remove reviewers for this workflow by modifying the **Who can review admin consent requests** list. A current limitation of this feature is that a reviewer retains the ability to review requests that were made while they were designated as a reviewer. Additionally, new reviewers will not be assigned to requests that were created before they were set as a reviewer.
+> You can add or remove reviewers for this workflow by modifying the **Who can review admin consent requests** list. A current limitation of this feature is that a reviewer retains the ability to review requests that were made while they were designated as a reviewer and will receive expiration reminder emails for those requests after they're removed from the reviewers list. Additionally, new reviewers will not be assigned to requests that were created before they were set as a reviewer.
## Configure the admin consent workflow using Microsoft Graph
active-directory Grant Admin Consent https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/manage-apps/grant-admin-consent.md
Previously updated : 11/07/2022 Last updated : 04/14/2023
To grant tenant-wide admin consent to an app listed in **Enterprise applications
1. Select **Azure Active Directory**, and then select **Enterprise applications**. 1. Select the application to which you want to grant tenant-wide admin consent, and then select **Permissions**. :::image type="content" source="media/grant-tenant-wide-admin-consent/grant-tenant-wide-admin-consent.png" alt-text="Screenshot shows how to grant tenant-wide admin consent.":::-
-1. Add the redirect **URI** (https://entra.microsoft.com/TokenAuthorize) as permitted redirect **URI** to the app.
1. Carefully review the permissions that the application requires. If you agree with the permissions the application requires, select **Grant admin consent**. ## Grant admin consent in App registrations
active-directory Cross Tenant Synchronization Configure Graph https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/multi-tenant-organizations/cross-tenant-synchronization-configure-graph.md
Previously updated : 03/08/2023 Last updated : 04/15/2023
These steps describe how to use Microsoft Graph Explorer (recommended), but you
1. In the target tenant, use the [Create crossTenantAccessPolicyConfigurationPartner](/graph/api/crosstenantaccesspolicy-post-partners?view=graph-rest-beta&preserve-view=true) API to create a new partner configuration in a cross-tenant access policy between the target tenant and the source tenant. Use the source tenant ID in the request.
+ If you get a `Request_MultipleObjectsWithSameKeyValue` error, you might already have an existing configuration. For more information, see [Symptom - Request_MultipleObjectsWithSameKeyValue error](#symptomrequest_multipleobjectswithsamekeyvalue-error).
+ **Request** ```http
These steps describe how to use Microsoft Graph Explorer (recommended), but you
1. Use the [Create identitySynchronization](/graph/api/crosstenantaccesspolicyconfigurationpartner-put-identitysynchronization?view=graph-rest-beta&preserve-view=true) API to enable user synchronization in the target tenant.
+ If you get a `Request_MultipleObjectsWithSameKeyValue` error, you might already have an existing policy. For more information, see [Symptom - Request_MultipleObjectsWithSameKeyValue error](#symptomrequest_multipleobjectswithsamekeyvalue-error).
+ **Request** ```http
These steps describe how to use Microsoft Graph Explorer (recommended), but you
1. In the source tenant, use the [Create crossTenantAccessPolicyConfigurationPartner](/graph/api/crosstenantaccesspolicy-post-partners?view=graph-rest-beta&preserve-view=true) API to create a new partner configuration in a cross-tenant access policy between the source tenant and the target tenant. Use the target tenant ID in the request.
+ If you get a `Request_MultipleObjectsWithSameKeyValue` error, you might already have an existing configuration. For more information, see [Symptom - Request_MultipleObjectsWithSameKeyValue error](#symptomrequest_multipleobjectswithsamekeyvalue-error).
+ **Request** ```http
Either the signed-in user doesn't have sufficient privileges, or you need to con
2. In [Microsoft Graph Explorer tool](https://aka.ms/ge), make sure you consent to the required permissions. See [Step 1: Sign in to tenants and consent to permissions](#step-1-sign-in-to-tenants-and-consent-to-permissions) earlier in this article.
+#### Symptom - Request_MultipleObjectsWithSameKeyValue error
+
+When you try to make a Graph API call, you receive an error message similar to the following:
+
+```
+code: Request_MultipleObjectsWithSameKeyValue
+message: Another object with the same value for property tenantId already exists.
+message: A conflicting object with one or more of the specified property values is present in the directory.
+```
+
+**Cause**
+
+You are likely trying to create a configuration or object that already exists, possibly from a previous configuration.
+
+**Solution**
+
+1. Verify your request syntax and that you are using the correct tenant ID.
+
+1. Make a `GET` request to list the existing object.
+
+1. If you have an existing object, instead of making a create request using `POST` or `PUT`, you might need to make an update request using `PATCH` (see the sketch after this list), such as:
+
+ - [Update crossTenantAccessPolicyConfigurationPartner](/graph/api/crosstenantaccesspolicyconfigurationpartner-update?view=graph-rest-beta&preserve-view=true)
+ - [Update crossTenantIdentitySyncPolicyPartner](/graph/api/crosstenantidentitysyncpolicypartner-update?view=graph-rest-beta&preserve-view=true)
+
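For example, a minimal recovery sketch might look like the following two separate requests: list the partner configurations first, and switch to `PATCH` only if the partner tenant already appears. The tenant ID is a placeholder, and the `automaticUserConsentSettings` value is only an illustration of a property you might update on an existing partner configuration:

```http
GET https://graph.microsoft.com/beta/policies/crossTenantAccessPolicy/partners

PATCH https://graph.microsoft.com/beta/policies/crossTenantAccessPolicy/partners/<partner-tenant-id>
Content-Type: application/json

{
  "automaticUserConsentSettings": {
    "inboundAllowed": true
  }
}
```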
+#### Symptom - Directory_ObjectNotFound error
+
+When you try to make a Graph API call, you receive an error message similar to the following:
+
+```
+code: Directory_ObjectNotFound
+message: Unable to read the company information from the directory.
+```
+
+**Cause**
+
+You are likely using `PATCH` to update an object that doesn't exist.
+
+**Solution**
+
+1. Verify your request syntax and that you are using the correct tenant ID.
+
+1. Make a `GET` request to verify the object doesn't exist.
+
+1. If the object doesn't exist, instead of making an update request using `PATCH`, you might need to make a create request using `POST` or `PUT` (see the sketch after this list), such as:
+
+ - [Create identitySynchronization](/graph/api/crosstenantaccesspolicyconfigurationpartner-put-identitysynchronization?view=graph-rest-beta&preserve-view=true)
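As a sketch only (the tenant ID and display name are placeholders), you might first confirm with a `GET`, and create the synchronization policy only if that request returns a not-found response:

```http
GET https://graph.microsoft.com/beta/policies/crossTenantAccessPolicy/partners/<partner-tenant-id>/identitySynchronization

PUT https://graph.microsoft.com/beta/policies/crossTenantAccessPolicy/partners/<partner-tenant-id>/identitySynchronization
Content-Type: application/json

{
  "displayName": "<partner-tenant-display-name>",
  "userSyncInbound": {
    "isSyncAllowed": true
  }
}
```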
+ ## Next steps - [Azure AD synchronization API overview](/graph/api/resources/synchronization-overview?view=graph-rest-beta&preserve-view=true)
active-directory Groups Assign Member Owner https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/privileged-identity-management/groups-assign-member-owner.md
na Previously updated : 01/12/2023 Last updated : 4/12/2023
When a membership or ownership is assigned, the assignment:
## Assign an owner or member of a group
-Follow these steps to make a user eligible member or owner of a group. You will need to have Global Administrator, Privileged Role Administrator role, or be an Owner of the group.
+Follow these steps to make a user an eligible member or owner of a group. You need permissions to manage groups. For role-assignable groups, you must have the Global Administrator or Privileged Role Administrator role, or be an Owner of the group. For non-role-assignable groups, you must have the Global Administrator, Directory Writer, Groups Administrator, Identity Governance Administrator, or User Administrator role, or be an Owner of the group. Role assignments for administrators should be scoped at the directory level (not the administrative unit level).
+
+> [!NOTE]
+> Other roles with permissions to manage groups (such as Exchange Administrators for non-role-assignable M365 groups) and administrators with assignments scoped at administrative unit level can manage groups through Groups API/UX and override changes made in Azure AD PIM.
1. [Sign in to the Azure portal](https://portal.azure.com).
Follow these steps to make a user eligible member or owner of a group. You will
## Update or remove an existing role assignment
-Follow these steps to update or remove an existing role assignment. You will need to have Global Administrator, Privileged Role Administrator role, or Owner role of the group.
+Follow these steps to update or remove an existing role assignment. You need permissions to manage groups. For role-assignable groups, you must have the Global Administrator or Privileged Role Administrator role, or be an Owner of the group. For non-role-assignable groups, you must have the Global Administrator, Directory Writer, Groups Administrator, Identity Governance Administrator, or User Administrator role, or be an Owner of the group. Role assignments for administrators should be scoped at the directory level (not the administrative unit level).
+
+> [!NOTE]
+> Other roles with permissions to manage groups (such as Exchange Administrators for non-role-assignable M365 groups) and administrators with assignments scoped at administrative unit level can manage groups through Groups API/UX and override changes made in Azure AD PIM.
1. [Sign in to the Azure portal](https://portal.azure.com) with appropriate role permissions.
active-directory Groups Discover Groups https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/privileged-identity-management/groups-discover-groups.md
na Previously updated : 01/12/2023 Last updated : 4/12/2023
Before you will start, you need an Azure AD Security group or Microsoft 365 grou
Dynamic groups and groups synchronized from on-premises environment cannot be managed in PIM for Groups.
-You should either be a group Owner, have Global Administrator role, or Privileged Role Administrator role to bring the group under management with PIM.
+You need appropriate permissions to bring groups under management in Azure AD PIM. For role-assignable groups, you must have the Global Administrator or Privileged Role Administrator role, or be an Owner of the group. For non-role-assignable groups, you must have the Global Administrator, Directory Writer, Groups Administrator, Identity Governance Administrator, or User Administrator role, or be an Owner of the group. Role assignments for administrators should be scoped at the directory level (not the administrative unit level).
+
+> [!NOTE]
+> Other roles with permissions to manage groups (such as Exchange Administrators for non-role-assignable M365 groups) and administrators with assignments scoped at administrative unit level can manage groups through Groups API/UX and override changes made in Azure AD PIM.
1. [Sign in to the Azure portal](https://portal.azure.com).
active-directory Groups Renew Extend https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/privileged-identity-management/groups-renew-extend.md
na Previously updated : 01/12/2023 Last updated : 4/12/2023
Privileged Identity Management (PIM) in Azure Active Directory (Azure AD), part
## Who can extend and renew
-Only Global Administrators, Privileged Role Administrators, or group owners can extend or renew group membership/ownership time-bound assignments. The affected user or group can request to extend assignments that are about to expire and request to renew assignments that are already expired.
+Only users with permissions to manage groups can extend or renew time-bound group membership or ownership assignments. The affected user or group can request to extend assignments that are about to expire and request to renew assignments that are already expired.
+
+Role-assignable groups can be managed by a Global Administrator, a Privileged Role Administrator, or an Owner of the group. Non-role-assignable groups can be managed by a Global Administrator, Directory Writer, Groups Administrator, Identity Governance Administrator, User Administrator, or Owner of the group. Role assignments for administrators should be scoped at the directory level (not the administrative unit level).
+
+> [!NOTE]
+> Other roles with permissions to manage groups (such as Exchange Administrators for non-role-assignable M365 groups) and administrators with assignments scoped at administrative unit level can manage groups through Groups API/UX and override changes made in Azure AD PIM.
## When notifications are sent
active-directory Groups Role Settings https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/privileged-identity-management/groups-role-settings.md
na Previously updated : 01/27/2023 Last updated : 4/12/2023
In Privileged Identity Management (PIM) for groups in Azure Active Directory (Azure AD), part of Microsoft Entra, role settings define membership or ownership assignment properties: MFA and approval requirements for activation, assignment maximum duration, notification settings, etc. Use the following steps to configure role settings and set up the approval workflow to specify who can approve or deny requests to elevate privilege.
-You need to have Global Administrator, Privileged Role Administrator, or group Owner permissions to manage settings for membership or ownership assignments of the group. Role settings are defined per role per group: all assignments for the same role (member or owner) for the same group follow same role settings. Role settings of one group are independent from role settings of another group. Role settings for one role (member) are independent from role settings for another role (owner).
+You need group management permissions to manage settings. For role-assignable groups, you must have the Global Administrator or Privileged Role Administrator role, or be an Owner of the group. For non-role-assignable groups, you must have the Global Administrator, Directory Writer, Groups Administrator, Identity Governance Administrator, or User Administrator role, or be an Owner of the group. Role assignments for administrators should be scoped at the directory level (not the administrative unit level).
+
+> [!NOTE]
+> Other roles with permissions to manage groups (such as Exchange Administrators for non-role-assignable M365 groups) and administrators with assignments scoped at administrative unit level can manage groups through Groups API/UX and override changes made in Azure AD PIM.
+
+Role settings are defined per role per group: all assignments for the same role (member or owner) for the same group follow the same role settings. Role settings of one group are independent from role settings of another group. Role settings for one role (member) are independent from role settings for another role (owner).
## Update role settings
active-directory Permissions Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/roles/permissions-reference.md
Azure Advanced Threat Protection | Monitor and respond to suspicious security ac
[Microsoft 365 service health](/microsoft-365/enterprise/view-service-health) | View the health of Microsoft 365 services [Smart lockout](../authentication/howto-password-smart-lockout.md) | Define the threshold and duration for lockouts when failed sign-in events happen. [Password Protection](../authentication/concept-password-ban-bad.md) | Configure custom banned password list or on-premises password protection.
+[Cross-tenant synchronization](../multi-tenant-organizations/cross-tenant-synchronization-overview.md) | Configure cross-tenant access settings for users in another tenant. Security Administrators can't directly create and delete users, but can indirectly create and delete synchronized users from another tenant when both tenants are configured for cross-tenant synchronization, which is a privileged permission.
> [!div class="mx-tableFixed"] > | Actions | Description |
active-directory Howspace Provisioning Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/howspace-provisioning-tutorial.md
The scenario outlined in this tutorial assumes that you already have the followi
* [An Azure AD tenant](../develop/quickstart-create-new-tenant.md) * A user account in Azure AD with [permission](../roles/permissions-reference.md) to configure provisioning (for example, Application Administrator, Cloud Application administrator, Application Owner, or Global Administrator).
-* A user account in Howspace with Admin permissions.
+* A Howspace subscription with single sign-on and SCIM features enabled.
+* A user account in Howspace with Main User Dashboard privileges.
## Step 1. Plan your provisioning deployment 1. Learn about [how the provisioning service works](../app-provisioning/user-provisioning.md).
The scenario outlined in this tutorial assumes that you already have the followi
1. Determine what data to [map between Azure AD and Howspace](../app-provisioning/customize-application-attributes.md). ## Step 2. Configure Howspace to support provisioning with Azure AD
-Contact Howspace support to configure Howspace to support provisioning with Azure AD.
+### Single sign-on configuration
+1. Sign in to the Howspace Main User Dashboard, then select **Settings** from the menu.
+1. In the settings list, select **single sign-on**.
+
+ ![Screenshot of the single sign-on section in the settings list.](media/howspace-provisioning-tutorial/settings-sso.png)
+
+1. Click the **Add SSO configuration** button.
+
+ ![Screenshot of the Add SSO configuration menu in the single sign-on section.](media/howspace-provisioning-tutorial/settings-sso-2.png)
+
+1. Select either **Azure Active Directory (Multi-Tenant)** or **Azure Active Directory** based on your organization's Azure AD topology.
+
+ ![Screenshot of the Azure Active Directory (Multi-Tenant) dialog.](media/howspace-provisioning-tutorial/settings-azure-ad-multi-tenant.png)
+ ![Screenshot of the Azure Active Directory dialog.](media/howspace-provisioning-tutorial/settings-azure-ad-single-tenant.png)
+
+1. Enter your Azure AD Tenant ID, and click **OK** to save the configuration.
+
+### Provisioning configuration
+1. In the settings list, select **System for Cross-domain Identity Management**.
+
+ ![Screenshot of the System for Cross-domain Identity Management section in the settings list.](media/howspace-provisioning-tutorial/settings-scim.png)
+
+1. Check the **Enable user synchronization** checkbox.
+1. Copy the Tenant URL and Secret Token for later use in Azure AD.
+1. Click **Save** to save the configuration.
+
+### Main user dashboard access control configuration
+1. In the settings list, select **Main User Dashboard Access Control**.
+
+ ![Screenshot of the Main User Dashboard Access Control section in the settings list.](media/howspace-provisioning-tutorial/settings-access-control.png)
+
+1. Check the **Enable single sign-on for main users** checkbox.
+1. Select the SSO configuration you created in the previous step.
+1. Enter the object IDs of the Azure AD user groups that should have access to the Main User Dashboard in the **Limit to following user groups** field. You can specify multiple groups by separating the object IDs with a comma.
+1. Click **Save** to save the configuration.
+
+### Workspace default access control configuration
+1. In the settings list, select **Workspace default settings**.
+
+ ![Screenshot of the Workspace default settings in the settings list.](media/howspace-provisioning-tutorial/settings-workspace-default.png)
+
+1. In the Workspace default settings list, select **Login, registration and SSO**.
+
+ ![Screenshot of the Login, registration and SSO section in the Workspace default settings list.](media/howspace-provisioning-tutorial/settings-workspace-sso.png)
+
+1. Check the **Users can login using single sign-on** checkbox.
+1. Select the SSO configuration you created in the previous step.
+1. Enter the object IDs of the Azure AD user groups that should have access to workspaces in the **Limit to following user groups** field. You can specify multiple groups by separating the object IDs with a comma.
+1. You can modify the user groups for each workspace individually after creating the workspace.
## Step 3. Add Howspace from the Azure AD application gallery
This section guides you through the steps to configure the Azure AD provisioning
|active|Boolean|| |name.givenName|String|| |name.familyName|String||
- |phoneNumbers[type eq "work"].value|String||
+ |phoneNumbers[type eq "mobile"].value|String||
|externalId|String|| 1. Under the **Mappings** section, select **Synchronize Azure Active Directory Groups to Howspace**.
aks Tutorial Kubernetes Workload Identity https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/learn/tutorial-kubernetes-workload-identity.md
The following output example resembles successful creation of the resource group
To install the aks-preview extension, run the following command:
-```azurecli
+```azurecli-interactive
az extension add --name aks-preview ``` Run the following command to update to the latest version of the extension released:
-```azurecli
+```azurecli-interactive
az extension update --name aks-preview ```
After a few minutes, the command completes and returns JSON-formatted informatio
To get the OIDC Issuer URL and save it to an environmental variable, run the following command. Replace the default value for the arguments `-n`, which is the name of the cluster and `-g`, the resource group name:
-```bash
+```azurecli-interactive
export AKS_OIDC_ISSUER="$(az aks show -n myAKSCluster -g myResourceGroup --query "oidcIssuerProfile.issuerUrl" -otsv)" ```
export FICID="fic-test-fic-name"
Use the Azure CLI [az keyvault create][az-keyvault-create] command to create a Key Vault in the resource group created earlier.
-```azurecli
+```azurecli-interactive
az keyvault create --resource-group "${RESOURCE_GROUP}" --location "${LOCATION}" --name "${KEYVAULT_NAME}" ```
At this point, your Azure account is the only one authorized to perform any oper
To add a secret to the vault, you need to run the Azure CLI [az keyvault secret set][az-keyvault-secret-set] command to create it. The password is the value you specified for the environment variable `KEYVAULT_SECRET_NAME` and stores the value of **Hello!** in it.
-```azurecli
+```azurecli-interactive
az keyvault secret set --vault-name "${KEYVAULT_NAME}" --name "${KEYVAULT_SECRET_NAME}" --value 'Hello!' ```
export KEYVAULT_URL="$(az keyvault show -g ${RESOURCE_GROUP} -n ${KEYVAULT_NAME}
Use the Azure CLI [az account set][az-account-set] command to set a specific subscription to be the current active subscription. Then use the [az identity create][az-identity-create] command to create a managed identity.
-```azurecli
+```azurecli-interactive
az account set --subscription "${SUBSCRIPTION}" ```
-```azurecli
+```azurecli-interactive
az identity create --name "${UAID}" --resource-group "${RESOURCE_GROUP}" --location "${LOCATION}" --subscription "${SUBSCRIPTION}" ``` Next, you need to set an access policy for the managed identity to access the Key Vault secret by running the following commands:
-```bash
+```azurecli-interactive
export USER_ASSIGNED_CLIENT_ID="$(az identity show --resource-group "${RESOURCE_GROUP}" --name "${UAID}" --query 'clientId' -otsv)" ```
-```azurecli
+```azurecli-interactive
az keyvault set-policy --name "${KEYVAULT_NAME}" --secret-permissions get --spn "${USER_ASSIGNED_CLIENT_ID}" ```
az keyvault set-policy --name "${KEYVAULT_NAME}" --secret-permissions get --spn
Create a Kubernetes service account and annotate it with the client ID of the Managed Identity created in the previous step. Use the [az aks get-credentials][az-aks-get-credentials] command and replace the default value for the cluster name and the resource group name.
-```azurecli
+```azurecli-interactive
az aks get-credentials -n myAKSCluster -g "${RESOURCE_GROUP}" ```
Serviceaccount/workload-identity-sa created
Use the [az identity federated-credential create][az-identity-federated-credential-create] command to create the federated identity credential between the managed identity, the service account issuer, and the subject.
-```azurecli
+```azurecli-interactive
az identity federated-credential create --name ${FICID} --identity-name ${UAID} --resource-group ${RESOURCE_GROUP} --issuer ${AKS_OIDC_ISSUER} --subject system:serviceaccount:${SERVICE_ACCOUNT_NAMESPACE}:${SERVICE_ACCOUNT_NAME} ```
kubectl delete pod quick-start
kubectl delete sa "${SERVICE_ACCOUNT_NAME}" --namespace "${SERVICE_ACCOUNT_NAMESPACE}" ```
-```azurecli
+```azurecli-interactive
az group delete --name "${RESOURCE_GROUP}" ```
aks Operator Best Practices Identity https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/operator-best-practices-identity.md
description: Learn the cluster operator best practices for how to manage authentication and authorization for clusters in Azure Kubernetes Service (AKS) Previously updated : 09/29/2022 Last updated : 04/14/2023 # Best practices for authentication and authorization in Azure Kubernetes Service (AKS)
There are two levels of access needed to fully operate an AKS cluster:
## Use pod-managed identities
-> **Best practice guidance**
->
-> Don't use fixed credentials within pods or container images, as they are at risk of exposure or abuse. Instead, use *pod identities* to automatically request access using Azure AD.
+Don't use fixed credentials within pods or container images, as they are at risk of exposure or abuse. Instead, use *pod identities* to automatically request access using Azure AD.
> [!NOTE]
-> Pod identities are intended for use with Linux pods and container images only. Pod-managed identities support for Windows containers is coming soon.
+> Pod identities are intended for use with Linux pods and container images only. Support for pod-managed identities (preview) on Windows containers is coming soon.
To access other Azure resources, like Azure Cosmos DB, Key Vault, or Blob storage, the pod needs authentication credentials. You could define authentication credentials with the container image or inject them as a Kubernetes secret. Either way, you would need to manually create and assign them. Usually, these credentials are reused across pods and aren't regularly rotated.
aks Static Ip https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/static-ip.md
This article shows you how to create a static public IP address and assign it to
## Create a static IP address
-1. Use the `az aks show`[az-aks-show] command to get the node resource group name of your AKS cluster, which follows this format: `MC_<resource group name>_<AKS cluster name>_<region>`.
+1. Create a resource group for your IP address.
```azurecli-interactive
- az aks show \
- --resource-group myResourceGroup \
- --name myAKSCluster
- --query nodeResourceGroup
- --output tsv
+ az group create --name myNetworkResourceGroup --location eastus
```
-2. Use the [`az network public ip create`][az-network-public-ip-create] command to create a static public IP address. The following example creates a static IP resource named *myAKSPublicIP* in the *MC_myResourceGroup_myAKSCluster_eastus* node resource group.
+2. Use the [`az network public-ip create`][az-network-public-ip-create] command to create a static public IP address. The following example creates a static IP resource named *myAKSPublicIP* in the *myNetworkResourceGroup* resource group.
```azurecli-interactive az network public-ip create \
- --resource-group MC_myResourceGroup_myAKSCluster_eastus \
+ --resource-group myNetworkResourceGroup \
--name myAKSPublicIP \ --sku Standard \ --allocation-method static
This article shows you how to create a static public IP address and assign it to
3. After you create the static public IP address, use the [`az network public-ip list`][az-network-public-ip-list] command to get the IP address. Specify the name of the node resource group and public IP address you created, and query for the *ipAddress*. ```azurecli-interactive
- az network public-ip show --resource-group MC_myResourceGroup_myAKSCluster_eastus --name myAKSPublicIP --query ipAddress --output tsv
+ az network public-ip show --resource-group myNetworkResourceGroup --name myAKSPublicIP --query ipAddress --output tsv
``` ## Create a service using the static IP address
This article shows you how to create a static public IP address and assign it to
1. Before creating a service, use the [`az role assignment create`][az-role-assignment-create] command to ensure the cluster identity used by the AKS cluster has delegated permissions to the node resource group. ```azurecli-interactive
+ CLIENT_ID=$(az aks show --name <cluster name> --resource-group <cluster resource group> --query identity.principalId -o tsv)
+ RG_SCOPE=$(az group show --name myNetworkResourceGroup --query id -o tsv)
az role assignment create \
- --assignee <Client ID> \
+ --assignee ${CLIENT_ID} \
--role "Network Contributor" \
- --scope /subscriptions/<subscription id>/resourceGroups/<MC_myResourceGroup_myAKSCluster_eastus>
+ --scope ${RG_SCOPE}
``` > [!IMPORTANT]
This article shows you how to create a static public IP address and assign it to
kind: Service metadata: annotations:
- service.beta.kubernetes.io/azure-load-balancer-resource-group: MC_myResourceGroup_myAKSCluster_eastus
+ service.beta.kubernetes.io/azure-load-balancer-resource-group: myNetworkResourceGroup
name: azure-load-balancer spec: loadBalancerIP: 40.121.183.52
analysis-services Analysis Services Create Terraform https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/analysis-services/analysis-services-create-terraform.md
Title: 'Quickstart: Create an Azure Analysis Services server using Terraform'
description: 'In this article, you create an Azure Analysis Services server using Terraform' Previously updated : 3/10/2023- Last updated : 4/14/2023+
api-management Quickstart Terraform https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/quickstart-terraform.md
Title: 'Quickstart: Create an Azure API Management service using Terraform'
description: 'In this article, you create an Azure API Management service using Terraform.' Previously updated : 3/13/2023- Last updated : 4/14/2023+
app-service Deploy Zip https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/deploy-zip.md
Publish-AzWebApp -ResourceGroupName Default-Web-WestUS -Name MyApp -ArchivePath
The following example uses the cURL tool to deploy a ZIP package. Replace the placeholders `<username>`, `<zip-package-path>`, and `<app-name>`. When prompted by cURL, type in the [deployment password](deploy-configure-credentials.md). ```bash
-curl -X POST -u <username:password> --data-binary "@<zip-package-path>" https://<app-name>.scm.azurewebsites.net/api/publish?type=zip
+curl -X POST -u <username:password> -T "<zip-package-path>" https://<app-name>.scm.azurewebsites.net/api/publish?type=zip
``` [!INCLUDE [deploying to network secured sites](../../includes/app-service-deploy-network-secured-sites.md)]
Publish-AzWebapp -ResourceGroupName <group-name> -Name <app-name> -ArchivePath <
The following example uses the cURL tool to deploy a .war, .jar, or .ear file. Replace the placeholders `<username>`, `<file-path>`, `<app-name>`, and `<package-type>` (`war`, `jar`, or `ear`, accordingly). When prompted by cURL, type in the [deployment password](deploy-configure-credentials.md). ```bash
-curl -X POST -u <username> --data-binary @"<file-path>" https://<app-name>.scm.azurewebsites.net/api/publish?type=<package-type>
+curl -X POST -u <username> -T "<file-path>" https://<app-name>.scm.azurewebsites.net/api/publish?type=<package-type>
``` [!INCLUDE [deploying to network secured sites](../../includes/app-service-deploy-network-secured-sites.md)]
Not supported. See Azure CLI or Kudu API.
The following example uses the cURL tool to deploy a startup file for their application.Replace the placeholders `<username>`, `<startup-file-path>`, and `<app-name>`. When prompted by cURL, type in the [deployment password](deploy-configure-credentials.md). ```bash
-curl -X POST -u <username> --data-binary @"<startup-file-path>" https://<app-name>.scm.azurewebsites.net/api/publish?type=startup
+curl -X POST -u <username> -T "<startup-file-path>" https://<app-name>.scm.azurewebsites.net/api/publish?type=startup
``` ### Deploy a library file
curl -X POST -u <username> --data-binary @"<startup-file-path>" https://<app-nam
The following example uses the cURL tool to deploy a library file for their application. Replace the placeholders `<username>`, `<lib-file-path>`, and `<app-name>`. When prompted by cURL, type in the [deployment password](deploy-configure-credentials.md). ```bash
-curl -X POST -u <username> --data-binary @"<lib-file-path>" https://<app-name>.scm.azurewebsites.net/api/publish?type=lib&path="/home/site/deployments/tools/my-lib.jar"
+curl -X POST -u <username> -T "<lib-file-path>" "https://<app-name>.scm.azurewebsites.net/api/publish?type=lib&path=/home/site/deployments/tools/my-lib.jar"
``` ### Deploy a static file
curl -X POST -u <username> --data-binary @"<lib-file-path>" https://<app-name>.s
The following example uses the cURL tool to deploy a config file for their application. Replace the placeholders `<username>`, `<config-file-path>`, and `<app-name>`. When prompted by cURL, type in the [deployment password](deploy-configure-credentials.md). ```bash
-curl -X POST -u <username> --data-binary @"<config-file-path>" https://<app-name>.scm.azurewebsites.net/api/publish?type=static&path="/home/site/deployments/tools/my-config.json"
+curl -X POST -u <username> -T "<config-file-path>" "https://<app-name>.scm.azurewebsites.net/api/publish?type=static&path=/home/site/deployments/tools/my-config.json"
``` # [Kudu UI](#tab/kudu-ui)
azure-app-configuration Cli Samples https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-app-configuration/cli-samples.md
Title: Azure CLI samples - Azure App Configuration description: Information about sample scripts provided for Azure App Configuration--++ Last updated 08/09/2022
azure-app-configuration Concept Customer Managed Keys https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-app-configuration/concept-customer-managed-keys.md
Title: Use customer-managed keys to encrypt your configuration data description: Encrypt your configuration data using customer-managed keys--++ Last updated 08/30/2022
azure-app-configuration Concept Disaster Recovery https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-app-configuration/concept-disaster-recovery.md
Title: Azure App Configuration resiliency and disaster recovery description: Lean how to implement resiliency and disaster recovery with Azure App Configuration.--++ Last updated 07/09/2020
azure-app-configuration Concept Enable Rbac https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-app-configuration/concept-enable-rbac.md
Title: Authorize access to Azure App Configuration using Azure Active Directory description: Enable Azure RBAC to authorize access to your Azure App Configuration instance--++ Last updated 05/26/2020
azure-app-configuration Concept Feature Management https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-app-configuration/concept-feature-management.md
Title: Understand feature management using Azure App Configuration description: Turn features on and off using Azure App Configuration --++
azure-app-configuration Concept Geo Replication https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-app-configuration/concept-geo-replication.md
Title: Geo-replication in Azure App Configuration description: Details of the geo-replication feature in Azure App Configuration. --++
azure-app-configuration Concept Github Action https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-app-configuration/concept-github-action.md
Title: Sync your GitHub repository to App Configuration description: Use GitHub Actions to automatically update your App Configuration instance when you update your GitHub repository.--++ Last updated 05/28/2020
azure-app-configuration Concept Key Value https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-app-configuration/concept-key-value.md
Title: Understand Azure App Configuration key-value store description: Understand key-value storage in Azure App Configuration, which stores configuration data as key-values. Key-values are a representation of application settings.--++ Last updated 09/14/2022
azure-app-configuration Concept Point Time Snapshot https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-app-configuration/concept-point-time-snapshot.md
Title: Retrieve key-values from a point-in-time
description: Retrieve old key-value pairs using point-in-time snapshots in Azure App Configuration, which maintains a record of changes to key-values. --++ Last updated 03/14/2022
azure-app-configuration Concept Private Endpoint https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-app-configuration/concept-private-endpoint.md
Title: Using private endpoints for Azure App Configuration description: Secure your App Configuration store using private endpoints --++ Last updated 07/15/2020
azure-app-configuration Enable Dynamic Configuration Dotnet Core https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-app-configuration/enable-dynamic-configuration-dotnet-core.md
description: In this tutorial, you learn how to dynamically update the configuration data for .NET Core apps documentationcenter: ''-+ editor: ''
ms.devlang: csharp
Last updated 07/01/2019-+ #Customer intent: I want to dynamically update my app to use the latest configuration data in App Configuration.
azure-app-configuration Enable Dynamic Configuration Dotnet https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-app-configuration/enable-dynamic-configuration-dotnet.md
Title: '.NET Framework Tutorial: dynamic configuration in Azure App Configuration' description: In this tutorial, you learn how to dynamically update the configuration data for .NET Framework apps using Azure App Configuration. -+ ms.devlang: csharp Last updated 03/20/2023-+ #Customer intent: I want to dynamically update my .NET Framework app to use the latest configuration data in App Configuration.
azure-app-configuration Howto App Configuration Event https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-app-configuration/howto-app-configuration-event.md
Title: Use Event Grid for App Configuration data change notifications description: Learn how to use Azure App Configuration event subscriptions to send key-value modification events to a web endpoint -+ ms.assetid: ms.devlang: csharp Last updated 03/04/2020-+
azure-app-configuration Howto Best Practices https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-app-configuration/howto-best-practices.md
Title: Azure App Configuration best practices | Microsoft Docs
description: Learn best practices while using Azure App Configuration. Topics covered include key groupings, key-value compositions, App Configuration bootstrap, and more. documentationcenter: ''-+ editor: '' ms.assetid: Last updated 09/21/2022-+
azure-app-configuration Howto Disable Public Access https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-app-configuration/howto-disable-public-access.md
Title: How to disable public access in Azure App Configuration description: How to disable public access to your Azure App Configuration store.--++ Last updated 07/12/2022
azure-app-configuration Howto Feature Filters Aspnet Core https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-app-configuration/howto-feature-filters-aspnet-core.md
description: Learn how to use feature filters to enable conditional feature flag
ms.devlang: csharp --++ Last updated 3/9/2020
azure-app-configuration Howto Import Export Data https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-app-configuration/howto-import-export-data.md
Title: Import or export data with Azure App Configuration description: Learn how to import or export configuration data to or from Azure App Configuration. Exchange data between your App Configuration store and code project. -+ Last updated 08/24/2022-+ # Import or export configuration data
azure-app-configuration Howto Integrate Azure Managed Service Identity https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-app-configuration/howto-integrate-azure-managed-service-identity.md
Title: Use managed identities to access App Configuration description: Authenticate to Azure App Configuration using managed identities--++
azure-app-configuration Howto Labels Aspnet Core https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-app-configuration/howto-labels-aspnet-core.md
description: This article describes how to use labels to retrieve app configuration values for the environment in which the app is currently running. ms.devlang: csharp-+ Last updated 3/12/2020-+ # Use labels to provide per-environment configuration values.
azure-app-configuration Howto Move Resource Between Regions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-app-configuration/howto-move-resource-between-regions.md
Title: Move an App Configuration store to another region description: Learn how to move an App Configuration store to a different region. --++ Last updated 03/27/2023
azure-app-configuration Howto Set Up Private Access https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-app-configuration/howto-set-up-private-access.md
Title: How to set up private access to an Azure App Configuration store description: How to set up private access to an Azure App Configuration store in the Azure portal and in the CLI.--++ Last updated 07/12/2022
azure-app-configuration Howto Targetingfilter Aspnet Core https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-app-configuration/howto-targetingfilter-aspnet-core.md
description: Learn how to enable staged rollout of features for targeted audiences ms.devlang: csharp--++ Last updated 11/20/2020
azure-app-configuration Integrate Ci Cd Pipeline https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-app-configuration/integrate-ci-cd-pipeline.md
Title: Integrate Azure App Configuration using a continuous integration and delivery pipeline description: Learn to implement continuous integration and delivery using Azure App Configuration -+ Last updated 08/30/2022-+ # Customer intent: I want to use Azure App Configuration data in my CI/CD pipeline.
azure-app-configuration Integrate Kubernetes Deployment Helm https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-app-configuration/integrate-kubernetes-deployment-helm.md
Title: Integrate Azure App Configuration with Kubernetes Deployment using Helm description: Learn how to use dynamic configurations in Kubernetes deployment with Helm. -+ Last updated 03/27/2023-+ #Customer intent: I want to use Azure App Configuration data in Kubernetes deployment with Helm.
azure-app-configuration Manage Feature Flags https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-app-configuration/manage-feature-flags.md
description: In this tutorial, you learn how to manage feature flags separately from your application by using Azure App Configuration. documentationcenter: ''-+ editor: '' ms.assetid:
ms.devlang: csharp Last updated 04/05/2022-+ #Customer intent: I want to control feature availability in my app by using App Configuration.
azure-app-configuration Monitor App Configuration Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-app-configuration/monitor-app-configuration-reference.md
Title: Monitoring Azure App Configuration data reference description: Important Reference material needed when you monitor App Configuration --++ Last updated 05/05/2021
azure-app-configuration Monitor App Configuration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-app-configuration/monitor-app-configuration.md
Title: Monitor Azure App Configuration description: Start here to learn how to monitor App Configuration --++ Last updated 05/05/2021
azure-app-configuration Overview Managed Identity https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-app-configuration/overview-managed-identity.md
Title: Configure managed identities with Azure App Configuration description: Learn how managed identities work in Azure App Configuration and how to configure a managed identity-+ Last updated 02/25/2020-+
azure-app-configuration Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-app-configuration/overview.md
Title: What is Azure App Configuration? description: Read an overview of the Azure App Configuration service. Understand why you would want to use App Configuration, and learn how you can use it.--++ Last updated 03/20/2023
azure-app-configuration Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-app-configuration/policy-reference.md
Title: Built-in policy definitions for Azure App Configuration
description: Lists Azure Policy built-in policy definitions for Azure App Configuration. These built-in policy definitions provide common approaches to managing your Azure resources. Last updated 02/21/2023 --++
azure-app-configuration Powershell Samples https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-app-configuration/powershell-samples.md
Last updated 01/19/2023--++ # PowerShell samples for Azure App Configuration
azure-app-configuration Pull Key Value Devops Pipeline https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-app-configuration/pull-key-value-devops-pipeline.md
Title: Pull settings to App Configuration with Azure Pipelines description: Learn to use Azure Pipelines to pull key-values to an App Configuration Store -+ Last updated 11/17/2020-+ # Pull settings to App Configuration with Azure Pipelines
azure-app-configuration Push Kv Devops Pipeline https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-app-configuration/push-kv-devops-pipeline.md
Title: Push settings to App Configuration with Azure Pipelines description: Learn to use Azure Pipelines to push key-values to an App Configuration Store -+ Last updated 02/23/2021-+ # Push settings to App Configuration with Azure Pipelines
azure-app-configuration Quickstart Azure App Configuration Create https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-app-configuration/quickstart-azure-app-configuration-create.md
Title: "Quickstart: Create an Azure App Configuration store"--++ description: "In this quickstart, learn how to create an App Configuration store." ms.devlang: csharp
azure-app-configuration Quickstart Bicep https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-app-configuration/quickstart-bicep.md
Title: Create an Azure App Configuration store using Bicep description: Learn how to create an Azure App Configuration store using Bicep.--++ Last updated 05/06/2022
azure-app-configuration Quickstart Container Apps https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-app-configuration/quickstart-container-apps.md
Title: "Quickstart: Use Azure App Configuration in Azure Container Apps" description: Learn how to connect a containerized application to Azure App Configuration, using Service Connector. -+ Last updated 03/02/2023-+
azure-app-configuration Quickstart Dotnet App https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-app-configuration/quickstart-dotnet-app.md
Title: Quickstart for Azure App Configuration with .NET Framework | Microsoft Do
description: In this article, create a .NET Framework app with Azure App Configuration to centralize storage and management of application settings separate from your code. documentationcenter: ''-+ ms.devlang: csharp Last updated 02/28/2023-+ #Customer intent: As a .NET Framework developer, I want to manage all my app settings in one place. # Quickstart: Create a .NET Framework app with Azure App Configuration
azure-app-configuration Quickstart Dotnet Core App https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-app-configuration/quickstart-dotnet-core-app.md
Title: Quickstart for Azure App Configuration with .NET Core | Microsoft Docs description: In this quickstart, create a .NET Core app with Azure App Configuration to centralize storage and management of application settings separate from your code. -+ ms.devlang: csharp Last updated 03/20/2023-+ #Customer intent: As a .NET Core developer, I want to manage all my app settings in one place. # Quickstart: Create a .NET Core app with App Configuration
azure-app-configuration Quickstart Feature Flag Azure Functions Csharp https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-app-configuration/quickstart-feature-flag-azure-functions-csharp.md
Title: Quickstart for adding feature flags to Azure Functions | Microsoft Docs description: In this quickstart, use Azure Functions with feature flags from Azure App Configuration and test the function locally. -+ ms.devlang: csharp Last updated 3/20/2023-+ # Quickstart: Add feature flags to an Azure Functions app
azure-app-configuration Quickstart Feature Flag Dotnet https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-app-configuration/quickstart-feature-flag-dotnet.md
Title: Quickstart for adding feature flags to .NET Framework apps | Microsoft Do
description: A quickstart for adding feature flags to .NET Framework apps and managing them in Azure App Configuration documentationcenter: ''-+ editor: '' ms.assetid:
.NET Last updated 3/20/2023-+ #Customer intent: As a .NET Framework developer, I want to use feature flags to control feature availability quickly and confidently. # Quickstart: Add feature flags to a .NET Framework app
azure-app-configuration Quickstart Javascript https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-app-configuration/quickstart-javascript.md
Title: Quickstart for using Azure App Configuration with JavaScript apps | Microsoft Docs description: In this quickstart, create a Node.js app with Azure App Configuration to centralize storage and management of application settings separate from your code. -+ ms.devlang: javascript Last updated 03/20/2023-+ #Customer intent: As a JavaScript developer, I want to manage all my app settings in one place. # Quickstart: Create a JavaScript app with Azure App Configuration
azure-app-configuration Quickstart Python Provider https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-app-configuration/quickstart-python-provider.md
Title: Quickstart for using Azure App Configuration with Python apps | Microsoft Learn description: In this quickstart, create a Python app with the Azure App Configuration to centralize storage and management of application settings separate from your code. -+ ms.devlang: python Last updated 03/20/2023-+ #Customer intent: As a Python developer, I want to manage all my app settings in one place. # Quickstart: Create a Python app with Azure App Configuration
azure-app-configuration Quickstart Python https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-app-configuration/quickstart-python.md
Title: Using Azure App Configuration in Python apps with the Azure SDK for Python | Microsoft Learn description: This document shows examples of how to use the Azure SDK for Python to access your data in Azure App Configuration. -+ ms.devlang: python Last updated 11/17/2022-+ #Customer intent: As a Python developer, I want to use the Azure SDK for Python to access my data in Azure App Configuration. # Create a Python app with the Azure SDK for Python
azure-app-configuration Quickstart Resource Manager https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-app-configuration/quickstart-resource-manager.md
Title: Create an Azure App Configuration store by using Azure Resource Manager template (ARM template) description: Learn how to create an Azure App Configuration store by using Azure Resource Manager template (ARM template).--++ Last updated 06/09/2021
azure-app-configuration Rest Api Authentication Azure Ad https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-app-configuration/rest-api-authentication-azure-ad.md
Title: Azure Active Directory REST API - authentication description: Use Azure Active Directory to authenticate to Azure App Configuration by using the REST API--++ Last updated 08/17/2020
azure-app-configuration Rest Api Authentication Hmac https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-app-configuration/rest-api-authentication-hmac.md
Title: Azure App Configuration REST API - HMAC authentication description: Use HMAC to authenticate to Azure App Configuration by using the REST API--++ ms.devlang: csharp, golang, java, javascript, powershell, python
azure-app-configuration Rest Api Authentication Index https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-app-configuration/rest-api-authentication-index.md
 Title: Azure App Configuration REST API - Authentication description: Reference pages for authentication using the Azure App Configuration REST API--++ Last updated 08/17/2020
azure-app-configuration Rest Api Authorization Azure Ad https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-app-configuration/rest-api-authorization-azure-ad.md
Title: Azure App Configuration REST API - Azure Active Directory authorization description: Use Azure Active Directory for authorization against Azure App Configuration by using the REST API--++ Last updated 08/17/2020
azure-app-configuration Rest Api Authorization Hmac https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-app-configuration/rest-api-authorization-hmac.md
Title: Azure App Configuration REST API - HMAC authorization description: Use HMAC for authorization against Azure App Configuration using the REST API--++ Last updated 08/17/2020
azure-app-configuration Rest Api Authorization Index https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-app-configuration/rest-api-authorization-index.md
 Title: Azure App Configuration REST API - Authorization description: Reference pages for authorization using the Azure App Configuration REST API--++ Last updated 08/17/2020
azure-app-configuration Rest Api Consistency https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-app-configuration/rest-api-consistency.md
 Title: Azure App Configuration REST API - consistency description: Reference pages for ensuring real-time consistency by using the Azure App Configuration REST API--++ Last updated 08/17/2020
azure-app-configuration Rest Api Fiddler https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-app-configuration/rest-api-fiddler.md
 Title: Azure Active Directory REST API - Test Using Fiddler description: Use Fiddler to test the Azure App Configuration REST API--++ Last updated 08/17/2020
azure-app-configuration Rest Api Headers https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-app-configuration/rest-api-headers.md
Title: Azure App Configuration REST API - Headers description: Reference pages for headers used with the Azure App Configuration REST API--++ Last updated 08/17/2020
azure-app-configuration Rest Api Key Value https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-app-configuration/rest-api-key-value.md
 Title: Azure App Configuration REST API - key-value description: Reference pages for working with key-values by using the Azure App Configuration REST API--++ Last updated 08/17/2020
azure-app-configuration Rest Api Keys https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-app-configuration/rest-api-keys.md
Title: Azure App Configuration REST API - Keys description: Reference pages for working with keys using the Azure App Configuration REST API--++ Last updated 08/17/2020
azure-app-configuration Rest Api Labels https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-app-configuration/rest-api-labels.md
Title: Azure App Configuration REST API - Labels description: Reference pages for working with labels using the Azure App Configuration REST API--++ Last updated 08/17/2020
azure-app-configuration Rest Api Locks https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-app-configuration/rest-api-locks.md
Title: Azure App Configuration REST API - locks description: Reference pages for working with key-value locks by using the Azure App Configuration REST API--++ Last updated 08/17/2020
azure-app-configuration Rest Api Postman https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-app-configuration/rest-api-postman.md
 Title: Azure Active Directory REST API - Test by using Postman description: Use Postman to test the Azure App Configuration REST API--++ Last updated 08/17/2020
azure-app-configuration Rest Api Revisions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-app-configuration/rest-api-revisions.md
Title: Azure App Configuration REST API - key-value revisions description: Reference pages for working with key-value revisions by using the Azure App Configuration REST API--++ Last updated 08/17/2020
azure-app-configuration Rest Api Throttling https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-app-configuration/rest-api-throttling.md
Title: Azure App Configuration REST API - Throttling description: Reference pages for understanding throttling when using the Azure App Configuration REST API--++ Last updated 08/17/2020
azure-app-configuration Rest Api Versioning https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-app-configuration/rest-api-versioning.md
Title: Azure App Configuration REST API - versioning description: Reference pages for versioning by using the Azure App Configuration REST API--++ Last updated 08/17/2020
azure-app-configuration Rest Api https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-app-configuration/rest-api.md
Title: Azure App Configuration REST API description: Reference pages for the Azure App Configuration REST API--++ Last updated 11/28/2022
azure-app-configuration Cli Create Service https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-app-configuration/scripts/cli-create-service.md
Title: Azure CLI Script Sample - Create an Azure App Configuration Store
description: Create an Azure App Configuration store using a sample Azure CLI script. See reference article links to commands used in the script. -+ Last updated 01/18/2023-+
azure-app-configuration Cli Delete Service https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-app-configuration/scripts/cli-delete-service.md
Title: Azure CLI Script Sample - Delete an Azure App Configuration Store
description: Delete an Azure App Configuration store using a sample Azure CLI script. See reference article links to commands used in the script. -+ ms.devlang: azurecli Last updated 02/19/2020-+
azure-app-configuration Cli Export https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-app-configuration/scripts/cli-export.md
Title: Azure CLI Script Sample - Export from an Azure App Configuration Store
description: Use Azure CLI script to export configuration from Azure App Configuration -+ ms.devlang: azurecli Last updated 02/19/2020-+
azure-app-configuration Cli Import https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-app-configuration/scripts/cli-import.md
Title: Azure CLI script sample - Import to an App Configuration store
description: Use Azure CLI script - Importing configuration to Azure App Configuration -+ ms.devlang: azurecli Last updated 02/19/2020-+
azure-app-configuration Cli Work With Keys https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-app-configuration/scripts/cli-work-with-keys.md
Title: Azure CLI Script Sample - Work with key-values in App Configuration Store
description: Use Azure CLI script to create, view, update and delete key values from App Configuration store -+ ms.devlang: azurecli Last updated 02/19/2020-+
azure-app-configuration Powershell Create Service https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-app-configuration/scripts/powershell-create-service.md
Title: PowerShell script sample - Create an Azure App Configuration store
description: Create an Azure App Configuration store using a sample PowerShell script. See reference article links to commands used in the script. -+ Last updated 02/12/2023-+
azure-app-configuration Powershell Delete Service https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-app-configuration/scripts/powershell-delete-service.md
Title: PowerShell script sample - Delete an Azure App Configuration store
description: Delete an Azure App Configuration store using a sample PowerShell script. See reference article links to commands used in the script. -+ Last updated 02/02/2023-+
azure-app-configuration Security Controls Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-app-configuration/security-controls-policy.md
Title: Azure Policy Regulatory Compliance controls for Azure App Configuration
description: Lists Azure Policy Regulatory Compliance controls available for Azure App Configuration. These built-in policy definitions provide common approaches to managing the compliance of your Azure resources. Last updated 02/14/2023 --++
azure-app-configuration Use Feature Flags Dotnet Core https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-app-configuration/use-feature-flags-dotnet-core.md
Title: Tutorial for using feature flags in a .NET Core app | Microsoft Docs
description: In this tutorial, you learn how to implement feature flags in .NET Core apps. documentationcenter: ''-+ editor: '' ms.assetid:
ms.devlang: csharp Last updated 09/17/2020-+ #Customer intent: I want to control feature availability in my app by using the .NET Core Feature Manager library.
azure-app-configuration Use Key Vault References Dotnet Core https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-app-configuration/use-key-vault-references-dotnet-core.md
Title: Tutorial for using Azure App Configuration Key Vault references in an ASP
description: In this tutorial, you learn how to use Azure App Configuration's Key Vault references from an ASP.NET Core app documentationcenter: ''-+ editor: '' ms.assetid:
ms.devlang: csharp Last updated 04/08/2020-+ #Customer intent: I want to update my ASP.NET Core application to reference values stored in Key Vault through App Configuration.
azure-cache-for-redis Cache Redis Modules https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-cache-for-redis/cache-redis-modules.md
With Azure Cache for Redis, you can use Redis modules as libraries to add more d
For more information on creating an Enterprise cache, see [Quickstart: Create a Redis Enterprise cache](quickstart-create-redis-enterprise.md).
-Modules were introduced in open-source Redis 4.0. The modules extend the use-cases of Redis by adding functionality like search capabilities and data structures like **bloom and cuckoo filters**.
+Modules were introduced in open-source Redis 4.0. The modules extend the use-cases of Redis by adding functionality like search capabilities and data structures like bloom and cuckoo filters.
## Scope of Redis modules
Features include:
- Geo-filtering - Boolean queries
-Additionally, **RediSearch** can function as a secondary index, expanding your cache beyond a key-value structure and offering more sophisticated queries.
+Additionally, **RediSearch** can function as a secondary index, expanding your cache beyond a key-value structure and offering more sophisticated queries.
-You can use **RediSearch** is used in a wide variety of use-cases, including real-time inventory, enterprise search, and in indexing external databases. [For more information, see the RediSearch documentation page](https://redis.io/docs/stack/search/).
+**RediSearch** also includes functionality to perform [vector similarity queries](https://redis.io/docs/stack/search/reference/vectors/) such as K-nearest neighbor (KNN) search. This feature allows Azure Cache for Redis to be used as a vector database, which is useful in AI use-cases like [semantic answer engines or any other application that requires the comparison of embeddings vectors](https://redis.com/blog/rediscover-redis-for-vector-similarity-search/) generated by machine learning models.
+
+You can use **RediSearch** in a wide variety of additional use-cases, including real-time inventory, enterprise search, and indexing external databases. [For more information, see the RediSearch documentation page](https://redis.io/docs/stack/search/).
>[!IMPORTANT] > The RediSearch module can only be used with the `Enterprise` clustering policy. For more information, see [Clustering Policy](quickstart-create-redis-enterprise.md#clustering-policy).
->[!NOTE]
-> The RediSearch module is the only module that can be used with active geo-replication.
- ### RedisBloom RedisBloom adds four probabilistic data structures to a Redis server: **bloom filter**, **cuckoo filter**, **count-min sketch**, and **top-k**. Each of these data structures offers a way to sacrifice perfect accuracy in return for higher speed and better memory efficiency.
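As a hedged illustration (not part of the source article), the RedisBloom commands can be sent from C# through the StackExchange.Redis client's generic `Execute` method. The sketch below assumes an Enterprise-tier cache with the RedisBloom module enabled; the connection string, filter name, and item values are placeholders.

```csharp
using System;
using StackExchange.Redis;

class BloomFilterDemo
{
    static void Main()
    {
        // Placeholder endpoint and access key for an Enterprise cache with RedisBloom enabled.
        var muxer = ConnectionMultiplexer.Connect(
            "contoso.eastus.redisenterprise.cache.azure.net:10000,password=<access-key>,ssl=true");
        IDatabase db = muxer.GetDatabase();

        // BF.ADD inserts an item into the bloom filter, creating the filter if it doesn't exist.
        db.Execute("BF.ADD", "newsletter:subscribers", "user@example.com");

        // BF.EXISTS reports whether the item *may* be present; false positives are possible,
        // false negatives are not.
        bool maybePresent = (bool)db.Execute("BF.EXISTS", "newsletter:subscribers", "user@example.com");
        Console.WriteLine(maybePresent
            ? "Possibly subscribed (bloom filters trade accuracy for speed and memory)."
            : "Definitely not subscribed.");
    }
}
```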
azure-functions Functions Bindings Azure Sql Output https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-bindings-azure-sql-output.md
namespace AzureSQL.ToDo
[HttpTrigger(AuthorizationLevel.Anonymous, "post", Route = "PostFunction")] HttpRequestData req, FunctionContext executionContext) {
- var logger = executionContext.GetLogger("HttpExample");
+ var logger = executionContext.GetLogger("PostToDo");
logger.LogInformation("C# HTTP trigger function processed a request."); string requestBody = await new StreamReader(req.Body).ReadToEndAsync();
using Microsoft.AspNetCore.Mvc;
using Microsoft.Extensions.Primitives; using Newtonsoft.Json;
-public static IActionResult Run(HttpRequest req, ILogger log, out ToDoItem todoItem, out RequestLog requestLog)
+public static IActionResult Run(HttpRequest req, ILogger log, out ToDoItem todoItem)
{ log.LogInformation("C# HTTP trigger function processed a request."); string requestBody = new StreamReader(req.Body).ReadToEnd(); todoItem = JsonConvert.DeserializeObject<ToDoItem>(requestBody);
- requestLog = new RequestLog();
- requestLog.RequestTimeStamp = DateTime.Now;
- requestLog.ItemCount = 1;
- return new OkObjectResult(todoItem); }-
-public class RequestLog {
- public DateTime RequestTimeStamp { get; set; }
- public int ItemCount { get; set; }
-}
``` <a id="http-trigger-write-to-two-tables-csharpscript"></a>
using Microsoft.AspNetCore.Mvc;
using Microsoft.Extensions.Primitives; using Newtonsoft.Json;
-public static IActionResult Run(HttpRequest req, ILogger log, out ToDoItem todoItem)
+public static IActionResult Run(HttpRequest req, ILogger log, out ToDoItem todoItem, out RequestLog requestLog)
{ log.LogInformation("C# HTTP trigger function processed a request."); string requestBody = new StreamReader(req.Body).ReadToEnd(); todoItem = JsonConvert.DeserializeObject<ToDoItem>(requestBody);
+ requestLog = new RequestLog();
+ requestLog.RequestTimeStamp = DateTime.Now;
+ requestLog.ItemCount = 1;
+ return new OkObjectResult(todoItem); }+
+public class RequestLog {
+ public DateTime RequestTimeStamp { get; set; }
+ public int ItemCount { get; set; }
+}
```
azure-functions Functions Bindings Azure Sql Trigger https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-bindings-azure-sql-trigger.md
Title: Azure SQL trigger for Functions
description: Learn to use the Azure SQL trigger in Azure Functions. Previously updated : 11/10/2022 Last updated : 4/14/2023 zone_pivot_groups: programming-languages-set-functions-lang-workers
zone_pivot_groups: programming-languages-set-functions-lang-workers
# Azure SQL trigger for Functions (preview) - > [!NOTE]
-> The Azure SQL trigger is only supported on **Premium and Dedicated** plans. Consumption is not currently supported.
+> The Azure SQL trigger for Functions is currently in preview and requires a preview extension library or preview extension bundle.
-The Azure SQL trigger uses [SQL change tracking](/sql/relational-databases/track-changes/about-change-tracking-sql-server) functionality to monitor a SQL table for changes and trigger a function when a row is created, updated, or deleted.
+The Azure SQL trigger uses [SQL change tracking](/sql/relational-databases/track-changes/about-change-tracking-sql-server) functionality to monitor a SQL table for changes and trigger a function when a row is created, updated, or deleted. For configuration details for change tracking for use with the Azure SQL trigger, see [Set up change tracking](#set-up-change-tracking-required). For information on setup details of the Azure SQL extension for Azure Functions, see the [SQL binding overview](./functions-bindings-azure-sql.md).
-For configuration details for change tracking for use with the Azure SQL trigger, see [Set up change tracking](#set-up-change-tracking-required). For information on setup details of the Azure SQL extension for Azure Functions, see the [SQL binding overview](./functions-bindings-azure-sql.md).
+The Azure SQL trigger scaling decisions for the Consumption and Premium plans are done via target-based scaling. For more information, see [Target-based scaling](functions-target-based-scaling.md).
## Functionality Overview
-The Azure SQL Trigger binding uses a polling loop to check for changes, triggering the user function when changes are detected. At a high level, the loop looks like this:
+The Azure SQL trigger binding uses a polling loop to check for changes, triggering the user function when changes are detected. At a high level, the loop looks like this:
``` while (true) {
Changes are processed in the order that their changes were made, with the oldest
For more information on change tracking and how it's used by applications such as Azure SQL triggers, see [work with change tracking](/sql/relational-databases/track-changes/work-with-change-tracking-sql-server). ++ ## Example usage
+<a id="example"></a>
-More samples for the Azure SQL trigger are available in the [GitHub repository](https://github.com/Azure/azure-functions-sql-extension/tree/main/samples/samples-csharp).
+
+# [In-process](#tab/in-process)
+
+More samples for the Azure SQL trigger are available in the [GitHub repository](https://github.com/Azure/azure-functions-sql-extension/tree/release/trigger/samples/samples-csharp).
The example refers to a `ToDoItem` class and a corresponding database table:
The SQL trigger binds to a `IReadOnlyList<SqlChange<T>>`, a list of `SqlChange`
- **Item:** the item that was changed. The type of the item should follow the table schema as seen in the `ToDoItem` class. - **Operation:** a value from `SqlChangeOperation` enum. The possible values are `Insert`, `Update`, and `Delete`.
-# [In-process](#tab/in-process)
- The following example shows a [C# function](functions-dotnet-class-library.md) that is invoked when there are changes to the `ToDo` table: ```cs
namespace AzureSQL.ToDo
# [Isolated process](#tab/isolated-process)
-Isolated worker process isn't currently supported.
+More samples for the Azure SQL trigger are available in the [GitHub repository](https://github.com/Azure/azure-functions-sql-extension/tree/release/trigger/samples/samples-outofproc).
++
+The example refers to a `ToDoItem` class and a corresponding database table:
+++
+[Change tracking](#set-up-change-tracking-required) is enabled on the database and on the table:
+
+```sql
+ALTER DATABASE [SampleDatabase]
+SET CHANGE_TRACKING = ON
+(CHANGE_RETENTION = 2 DAYS, AUTO_CLEANUP = ON);
+
+ALTER TABLE [dbo].[ToDo]
+ENABLE CHANGE_TRACKING;
+```
+
+The SQL trigger binds to a `IReadOnlyList<SqlChange<T>>`, a list of `SqlChange` objects each with two properties:
+- **Item:** the item that was changed. The type of the item should follow the table schema as seen in the `ToDoItem` class.
+- **Operation:** a value from `SqlChangeOperation` enum. The possible values are `Insert`, `Update`, and `Delete`.
+
+The following example shows a [C# function](functions-dotnet-class-library.md) that is invoked when there are changes to the `ToDo` table:
+
+```cs
+using System;
+using System.Collections.Generic;
+using Microsoft.Azure.Functions.Worker;
+using Microsoft.Azure.Functions.Worker.Extensions.Sql;
+using Microsoft.Extensions.Logging;
+using Newtonsoft.Json;
++
+namespace AzureSQL.ToDo
+{
+ public static class ToDoTrigger
+ {
+        [Function("ToDoTrigger")]
+ public static void Run(
+ [SqlTrigger("[dbo].[ToDo]", "SqlConnectionString")]
+ IReadOnlyList<SqlChange<ToDoItem>> changes,
+ FunctionContext context)
+ {
+ var logger = context.GetLogger("ToDoTrigger");
+ foreach (SqlChange<ToDoItem> change in changes)
+ {
+ ToDoItem toDoItem = change.Item;
+ logger.LogInformation($"Change operation: {change.Operation}");
+ logger.LogInformation($"Id: {toDoItem.Id}, Title: {toDoItem.title}, Url: {toDoItem.url}, Completed: {toDoItem.completed}");
+ }
+ }
+ }
+}
+```
+
-<!-- Uncomment to support C# script examples.
# [C# Script](#tab/csharp-script) >
+More samples for the Azure SQL trigger are available in the [GitHub repository](https://github.com/Azure/azure-functions-sql-extension/tree/release/trigger/samples/samples-csharpscript).
++
+The example refers to a `ToDoItem` class and a corresponding database table:
+++
+[Change tracking](#set-up-change-tracking-required) is enabled on the database and on the table:
+
+```sql
+ALTER DATABASE [SampleDatabase]
+SET CHANGE_TRACKING = ON
+(CHANGE_RETENTION = 2 DAYS, AUTO_CLEANUP = ON);
+
+ALTER TABLE [dbo].[ToDo]
+ENABLE CHANGE_TRACKING;
+```
+
+The SQL trigger binds to a `IReadOnlyList<SqlChange<T>>`, a list of `SqlChange` objects each with two properties:
+- **Item:** the item that was changed. The type of the item should follow the table schema as seen in the `ToDoItem` class.
+- **Operation:** a value from `SqlChangeOperation` enum. The possible values are `Insert`, `Update`, and `Delete`.
+
+The following example shows a SQL trigger in a function.json file and a [C# script function](functions-reference-csharp.md) that is invoked when there are changes to the `ToDo` table:
+
+The following is binding data in the function.json file:
+
+```json
+{
+ "name": "todoChanges",
+ "type": "sqlTrigger",
+ "direction": "in",
+ "tableName": "dbo.ToDo",
+ "connectionStringSetting": "SqlConnectionString"
+}
+```
+The following is the C# script function:
+
+```csharp
+#r "Newtonsoft.Json"
+
+using System.Net;
+using Microsoft.AspNetCore.Mvc;
+using Microsoft.Extensions.Primitives;
+using Newtonsoft.Json;
+
+public static void Run(IReadOnlyList<SqlChange<ToDoItem>> todoChanges, ILogger log)
+{
+ log.LogInformation($"C# SQL trigger function processed a request.");
+
+ foreach (SqlChange<ToDoItem> change in todoChanges)
+ {
+ ToDoItem toDoItem = change.Item;
+ log.LogInformation($"Change operation: {change.Operation}");
+ log.LogInformation($"Id: {toDoItem.Id}, Title: {toDoItem.title}, Url: {toDoItem.url}, Completed: {toDoItem.completed}");
+ }
+}
+```
+ ++
+## Example usage
+<a id="example"></a>
+
+More samples for the Azure SQL trigger are available in the [GitHub repository](https://github.com/Azure/azure-functions-sql-extension/tree/release/trigger/samples/samples-java).
++
+The example refers to a `ToDoItem` class, a `SqlChangeToDoItem` class, a `SqlChangeOperation` enum, and a corresponding database table:
+
+In a separate file `ToDoItem.java`:
+
+```java
+package com.function;
+import java.util.UUID;
+
+public class ToDoItem {
+ public UUID Id;
+ public int order;
+ public String title;
+ public String url;
+ public boolean completed;
+
+ public ToDoItem() {
+ }
+
+ public ToDoItem(UUID Id, int order, String title, String url, boolean completed) {
+ this.Id = Id;
+ this.order = order;
+ this.title = title;
+ this.url = url;
+ this.completed = completed;
+ }
+}
+```
+
+In a separate file `SqlChangeToDoItem.java`:
+```java
+package com.function;
+
+public class SqlChangeToDoItem {
+ public ToDoItem item;
+ public SqlChangeOperation operation;
+
+ public SqlChangeToDoItem() {
+ }
+
+ public SqlChangeToDoItem(ToDoItem item, SqlChangeOperation operation) {
+ this.item = item;
+ this.operation = operation;
+ }
+}
+```
+
+In a separate file `SqlChangeOperation.java`:
+```java
+package com.function;
+
+import com.google.gson.annotations.SerializedName;
+
+public enum SqlChangeOperation {
+ @SerializedName("0")
+ Insert,
+ @SerializedName("1")
+ Update,
+ @SerializedName("2")
+ Delete;
+}
+```
++
+[Change tracking](#set-up-change-tracking-required) is enabled on the database and on the table:
+
+```sql
+ALTER DATABASE [SampleDatabase]
+SET CHANGE_TRACKING = ON
+(CHANGE_RETENTION = 2 DAYS, AUTO_CLEANUP = ON);
+
+ALTER TABLE [dbo].[ToDo]
+ENABLE CHANGE_TRACKING;
+```
+
+The SQL trigger binds to a `SqlChangeToDoItem[]`, an array of `SqlChangeToDoItem` objects each with two properties:
+- **item:** the item that was changed. The type of the item should follow the table schema as seen in the `ToDoItem` class.
+- **operation:** a value from `SqlChangeOperation` enum. The possible values are `Insert`, `Update`, and `Delete`.
++
+The following example shows a Java function that is invoked when there are changes to the `ToDo` table:
+
+```java
+package com.function;
+
+import com.microsoft.azure.functions.ExecutionContext;
+import com.microsoft.azure.functions.annotation.FunctionName;
+import com.microsoft.azure.functions.sql.annotation.SQLTrigger;
+import com.function.SqlChangeToDoItem;
+import com.google.gson.Gson;
+
+import java.util.logging.Level;
+
+public class ProductsTrigger {
+ @FunctionName("ToDoTrigger")
+ public void run(
+ @SQLTrigger(
+ name = "todoItems",
+ tableName = "[dbo].[ToDo]",
+ connectionStringSetting = "SqlConnectionString")
+ SqlChangeToDoItem[] todoItems,
+ ExecutionContext context) {
+
+        context.getLogger().log(Level.INFO, "SQL Changes: " + new Gson().toJson(todoItems));
+ }
+}
+```
++++
+## Example usage
+<a id="example"></a>
+
+More samples for the Azure SQL trigger are available in the [GitHub repository](https://github.com/Azure/azure-functions-sql-extension/tree/release/trigger/samples/samples-powershell).
++
+The example refers to a `ToDoItem` database table:
++
+[Change tracking](#set-up-change-tracking-required) is enabled on the database and on the table:
+
+```sql
+ALTER DATABASE [SampleDatabase]
+SET CHANGE_TRACKING = ON
+(CHANGE_RETENTION = 2 DAYS, AUTO_CLEANUP = ON);
+
+ALTER TABLE [dbo].[ToDo]
+ENABLE CHANGE_TRACKING;
+```
+
+The SQL trigger binds to `todoChanges`, a list of objects each with two properties:
+- **item:** the item that was changed. The structure of the item will follow the table schema.
+- **operation:** the possible values are `Insert`, `Update`, and `Delete`.
++
+The following example shows a PowerShell function that is invoked when there are changes to the `ToDo` table.
+
+The following is binding data in the function.json file:
+
+```json
+{
+ "name": "todoChanges",
+ "type": "sqlTrigger",
+ "direction": "in",
+ "tableName": "dbo.ToDo",
+ "connectionStringSetting": "SqlConnectionString"
+}
+```
++
+The [configuration](#configuration) section explains these properties.
+
+The following is sample PowerShell code for the function in the `run.ps1` file:
+
+```powershell
+using namespace System.Net
+
+param($todoChanges)
+# The output is used to inspect the trigger binding parameter in test methods.
+# Use -Compress to remove new lines and spaces for testing purposes.
+$changesJson = $todoChanges | ConvertTo-Json -Compress
+Write-Host "SQL Changes: $changesJson"
+```
++++++
+## Example usage
+<a id="example"></a>
+
+More samples for the Azure SQL trigger are available in the [GitHub repository](https://github.com/Azure/azure-functions-sql-extension/tree/release/trigger/samples/samples-js).
++
+The example refers to a `ToDoItem` database table:
++
+[Change tracking](#set-up-change-tracking-required) is enabled on the database and on the table:
+
+```sql
+ALTER DATABASE [SampleDatabase]
+SET CHANGE_TRACKING = ON
+(CHANGE_RETENTION = 2 DAYS, AUTO_CLEANUP = ON);
+
+ALTER TABLE [dbo].[ToDo]
+ENABLE CHANGE_TRACKING;
+```
+
+The SQL trigger binds `todoChanges`, an array of objects each with two properties:
+- **item:** the item that was changed. The structure of the item will follow the table schema.
+- **operation:** the possible values are `Insert`, `Update`, and `Delete`.
++
+The following example shows a JavaScript function that is invoked when there are changes to the `ToDo` table.
+
+The following is binding data in the function.json file:
+
+```json
+{
+ "name": "todoChanges",
+ "type": "sqlTrigger",
+ "direction": "in",
+ "tableName": "dbo.ToDo",
+ "connectionStringSetting": "SqlConnectionString"
+}
+```
++
+The [configuration](#configuration) section explains these properties.
+
+The following is sample JavaScript code for the function in the `index.js` file:
+
+```javascript
+module.exports = async function (context, todoChanges) {
+ context.log(`SQL Changes: ${JSON.stringify(todoChanges)}`)
+}
+```
+++++
+## Example usage
+<a id="example"></a>
+
+More samples for the Azure SQL trigger are available in the [GitHub repository](https://github.com/Azure/azure-functions-sql-extension/tree/release/trigger/samples/samples-python).
++
+The example refers to a `ToDoItem` database table:
+++
+[Change tracking](#set-up-change-tracking-required) is enabled on the database and on the table:
+
+```sql
+ALTER DATABASE [SampleDatabase]
+SET CHANGE_TRACKING = ON
+(CHANGE_RETENTION = 2 DAYS, AUTO_CLEANUP = ON);
+
+ALTER TABLE [dbo].[ToDo]
+ENABLE CHANGE_TRACKING;
+```
+
+The SQL trigger binds to a variable `todoChanges`, a list of objects each with two properties:
+- **item:** the item that was changed. The structure of the item will follow the table schema.
+- **operation:** the possible values are `Insert`, `Update`, and `Delete`.
++
+The following example shows a Python function that is invoked when there are changes to the `ToDo` table.
+
+The following is binding data in the function.json file:
+
+```json
+{
+ "name": "todoChanges",
+ "type": "sqlTrigger",
+ "direction": "in",
+ "tableName": "dbo.ToDo",
+ "connectionStringSetting": "SqlConnectionString"
+}
+```
++
+The [configuration](#configuration) section explains these properties.
+
+The following is sample Python code for the function in the `__init__.py` file:
+
+```python
+import json
+import logging
+
+def main(todoChanges):
+    logging.info("SQL Changes: %s", json.loads(todoChanges))
+```
++++++ ## Attributes The [C# library](functions-dotnet-class-library.md) uses the [SqlTrigger](https://github.com/Azure/azure-functions-sql-extension/blob/main/src/TriggerBinding/SqlTriggerAttribute.cs) attribute to declare the SQL trigger on the function, which has the following properties:
The [C# library](functions-dotnet-class-library.md) uses the [SqlTrigger](https:
| **TableName** | Required. The name of the table monitored by the trigger. | | **ConnectionStringSetting** | Required. The name of an app setting that contains the connection string for the database containing the table monitored for changes. The connection string setting name corresponds to the application setting (in `local.settings.json` for local development) that contains the [connection string](/dotnet/api/microsoft.data.sqlclient.sqlconnection.connectionstring?view=sqlclient-dotnet-core-5.&preserve-view=true#Microsoft_Data_SqlClient_SqlConnection_ConnectionString) to the Azure SQL or SQL Server instance.|
-## Configuration
-<!-- ### for another day ###
+++
+## Annotations
+
+In the [Java functions runtime library](/java/api/overview/azure/functions/runtime), use the `@SQLTrigger` annotation (`com.microsoft.azure.functions.sql.annotation.SQLTrigger`) on parameters whose value would come from Azure SQL. This annotation supports the following elements:
+
+| Element |Description|
+|||
+| **name** | Required. The name of the parameter that the trigger binds to. |
+| **tableName** | Required. The name of the table monitored by the trigger. |
+| **connectionStringSetting** | Required. The name of an app setting that contains the connection string for the database containing the table monitored for changes. The connection string setting name corresponds to the application setting (in `local.settings.json` for local development) that contains the [connection string](/dotnet/api/microsoft.data.sqlclient.sqlconnection.connectionstring?view=sqlclient-dotnet-core-5.&preserve-view=true#Microsoft_Data_SqlClient_SqlConnection_ConnectionString) to the Azure SQL or SQL Server instance.|
+++
+## Configuration
The following table explains the binding configuration properties that you set in the function.json file. |function.json property | Description|
+||-|
+| **name** | Required. The name of the parameter that the trigger binds to. |
+| **type** | Required. Must be set to `sqlTrigger`. |
+| **direction** | Required. Must be set to `in`. |
+| **tableName** | Required. The name of the table monitored by the trigger. |
+| **connectionStringSetting** | Required. The name of an app setting that contains the connection string for the database containing the table monitored for changes. The connection string setting name corresponds to the application setting (in `local.settings.json` for local development) that contains the [connection string](/dotnet/api/microsoft.data.sqlclient.sqlconnection.connectionstring?view=sqlclient-dotnet-core-5.&preserve-view=true#Microsoft_Data_SqlClient_SqlConnection_ConnectionString) to the Azure SQL or SQL Server instance.|
>
+## Optional Configuration
In addition to the required ConnectionStringSetting [application setting](./functions-how-to-use-azure-function-app-settings.md#settings), the following optional settings can be configured for the SQL trigger:
If the function execution fails five times in a row for a given row then that ro
- [Read data from a database (Input binding)](./functions-bindings-azure-sql-input.md) - [Save data to a database (Output binding)](./functions-bindings-azure-sql-output.md) -
-> [!NOTE]
-> In the current preview, Azure SQL triggers are only supported by [C# class library functions](functions-dotnet-class-library.md)
-
azure-functions Functions Bindings Azure Sql https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-bindings-azure-sql.md
description: Understand how to use Azure SQL bindings in Azure Functions.
Previously updated : 4/7/2023 Last updated : 4/14/2023 zone_pivot_groups: programming-languages-set-functions-lang-workers
Add the Java library for SQL bindings to your functions project with an update t
<dependency> <groupId>com.microsoft.azure.functions</groupId> <artifactId>azure-functions-java-library-sql</artifactId>
- <version>0.1.1</version>
+ <version>2.0.0-preview</version>
</dependency> ```
azure-functions Functions Bindings Warmup https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-bindings-warmup.md
The following considerations apply to using a warmup function in C#:
# [Isolated process](#tab/isolated-process) -- Your function must be named `warmup` (case-insensitive) using the `FunctionName` attribute.
+- Your function must be named `warmup` (case-insensitive) using the `Function` attribute.
- A return value attribute isn't required. - You can pass an object instance to the function.
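For illustration only, here's a minimal sketch of what such an isolated-process warmup function can look like. It assumes the `Microsoft.Azure.Functions.Worker` model and its warmup trigger extension (`WarmupTrigger`); the class and logger names are placeholders, not taken from the article.

```csharp
using Microsoft.Azure.Functions.Worker;
using Microsoft.Extensions.Logging;

public class Warmup
{
    private readonly ILogger _logger;

    public Warmup(ILoggerFactory loggerFactory)
    {
        _logger = loggerFactory.CreateLogger<Warmup>();
    }

    // The function name must be "warmup" (case-insensitive); the trigger payload
    // can be bound to a plain object instance, and no return value is required.
    [Function("Warmup")]
    public void Run([WarmupTrigger] object warmupContext, FunctionContext context)
    {
        _logger.LogInformation("Function app instance is now warm.");
    }
}
```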
azure-functions Functions Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-overview.md
The following are a common, _but by no means exhaustive_, set of scenarios for A
| **Create reliable message queue systems** | Process message queues using [Queue Storage](./functions-bindings-storage-queue.md), [Service Bus](./functions-bindings-service-bus.md), or [Event Hubs](./functions-bindings-event-hubs.md) | | **Analyze IoT data streams** | Collect and process [data from IoT devices](./functions-bindings-event-iot.md) | | **Process data in real time** | Use [Functions and SignalR](./functions-bindings-signalr-service.md) to respond to data in the moment |
+| **Connect to a SQL database** | Use [SQL bindings](./functions-bindings-azure-sql.md) to read or write data from Azure SQL |
These scenarios allow you to build event-driven systems using modern architectural patterns.
azure-functions Functions Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-reference.md
The following components support identity-based connections:
| Azure Blobs triggers and bindings | All | [Azure Blobs extension version 5.0.0 or later][blobv5],<br/>[Extension bundle 3.3.0 or later][blobv5] | | Azure Queues triggers and bindings | All | [Azure Queues extension version 5.0.0 or later][queuev5],<br/>[Extension bundle 3.3.0 or later][queuev5] | | Azure Tables (when using Azure Storage) | All | [Azure Tables extension version 1.0.0 or later](./functions-bindings-storage-table.md#table-api-extension),<br/>[Extension bundle 3.3.0 or later][tablesv1] |
+| Azure SQL Database | All | [Connect a function app to Azure SQL with managed identity and SQL bindings][azuresql-identity]
| Azure Event Hubs triggers and bindings | All | [Azure Event Hubs extension version 5.0.0 or later][eventhubv5],<br/>[Extension bundle 3.3.0 or later][eventhubv5] | | Azure Service Bus triggers and bindings | All | [Azure Service Bus extension version 5.0.0 or later][servicebusv5],<br/>[Extension bundle 3.3.0 or later][servicebusv5] | | Azure Cosmos DB triggers and bindings | All | [Azure Cosmos DB extension version 4.0.0 or later][cosmosv4],<br/> [Extension bundle 4.0.2 or later][cosmosv4]|
The following components support identity-based connections:
[tablesv1]: ./functions-bindings-storage-table.md#table-api-extension [signalr]: ./functions-bindings-signalr-service.md#install-extension [durable-identity]: ./durable/durable-functions-configure-durable-functions-with-credentials.md
+[azuresql-identity]: ./functions-identity-access-azure-sql-with-managed-identity.md
[!INCLUDE [functions-identity-based-connections-configuration](../../includes/functions-identity-based-connections-configuration.md)]
azure-maps How To Manage Account Keys https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/how-to-manage-account-keys.md
You can manage your Azure Maps account through the Azure portal. After you have
## Prerequisites -- If you don't already have an Azure account, [sign up for a free account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F) before you continue.-- For picking account location and you're unfamiliar with managed identities for Azure resources, check out the [overview section](../active-directory/managed-identities-azure-resources/overview.md).
+- If you don't already have an Azure account, [sign up for a free account] before you continue.
+- For picking account location, if you're unfamiliar with managed identities for Azure resources, see [managed identities for Azure resources].
## Account location
-Picking a location for your Azure Maps account that aligns with other resources in your subscription, like managed identities, may help to improve the level of service for [control-plane](../azure-resource-manager/management/control-plane-and-data-plane.md) operations.
+Picking a location for your Azure Maps account that aligns with other resources in your subscription, like managed identities, may help to improve the level of service for [control-plane] operations.
-As an example, the managed identity infrastructure will communicate and notify the Azure Maps management services for changes to the identity resource such as credential renewal or deletion. Sharing the same Azure location enables a consistent infrastructure provisioning for all resources.
+As an example, the managed identity infrastructure notifies the Azure Maps management services of changes to the identity resource, such as credential renewal or deletion. Sharing the same Azure location enables consistent infrastructure provisioning for all resources.
-Any Azure Maps REST API on endpoint `atlas.microsoft.com`, `*.atlas.microsoft.com`, or other endpoints belonging to the Azure data-plane are not affected by the choice of the Azure Maps account location.
+An Azure Maps account, regardless of location, can access any endpoint belonging to the Azure data-plane, such as `atlas.microsoft.com` and `*.atlas.microsoft.com`, when using the Azure Maps REST API.
-Read more about data-plane service coverage for Azure Maps services on [geographic coverage](./geographic-coverage.md).
+Read more about data-plane service coverage for Azure Maps services in [geographic coverage].
## Create a new account
-1. Sign in to the [Azure portal](https://portal.azure.com).
+1. Sign in to the [Azure portal].
2. Select **Create a resource** in the upper-left corner of the Azure portal.
You then see a confirmation page. You can confirm the deletion of your account b
Set up authentication with Azure Maps and learn how to get an Azure Maps subscription key: > [!div class="nextstepaction"]
-> [Manage authentication](how-to-manage-authentication.md)
+> [Manage authentication]
Learn how to manage an Azure Maps account pricing tier: > [!div class="nextstepaction"]
-> [Manage a pricing tier](how-to-manage-pricing-tier.md)
+> [Manage a pricing tier]
Learn how to see the API usage metrics for your Azure Maps account: > [!div class="nextstepaction"]
-> [View usage metrics](how-to-view-api-usage.md)
+> [View usage metrics]
+
+[Azure portal]: https://portal.azure.com
+[control-plane]: ../azure-resource-manager/management/control-plane-and-data-plane.md
+[geographic coverage]: geographic-coverage.md
+[Manage a pricing tier]: how-to-manage-pricing-tier.md
+[Manage authentication]: how-to-manage-authentication.md
+[managed identities for Azure resources]: ../active-directory/managed-identities-azure-resources/overview.md
+[sign up for a free account]: https://azure.microsoft.com/free/?WT.mc_id=A261C142F
+[View usage metrics]: how-to-view-api-usage.md
azure-maps How To Manage Authentication https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/how-to-manage-authentication.md
custom.ms: subject-rbac-steps
# Manage authentication in Azure Maps
-When you create an Azure Maps account, your client ID is automatically generated along with primary and secondary keys that are required for authentication when using [Azure Active Directory (Azure AD)](../active-directory/fundamentals/active-directory-whatis.md) or [Shared Key authentication](./azure-maps-authentication.md#shared-key-authentication).
+When you create an Azure Maps account, your client ID and shared keys are created automatically. These values are required for authentication when using either [Azure Active Directory (Azure AD)] or [Shared Key authentication].
## Prerequisites
-Sign in to the [Azure portal](https://portal.azure.com). If you don't have an Azure subscription, create a [free account](https://azure.microsoft.com/free/) before you begin.
-- A familiarization with [managed identities for Azure resources](../active-directory/managed-identities-azure-resources/overview.md). Be sure to understand the two [Managed identity types](../active-directory/managed-identities-azure-resources/overview.md#managed-identity-types) and how they differ.-- [An Azure Maps account](quick-demo-map-app.md#create-an-azure-maps-account).-- A familiarization with [Azure Maps Authentication](./azure-maps-authentication.md).
+Sign in to the [Azure portal]. If you don't have an Azure subscription, create a [free account] before you begin.
+
+- Familiarity with [managed identities for Azure resources]. Be sure to understand the two [Managed identity types] and how they differ.
+- [An Azure Maps account].
+- Familiarity with [Azure Maps Authentication].
## View authentication details
Sign in to the [Azure portal](https://portal.azure.com). If you don't have an Az
To view your Azure Maps authentication details:
-1. Sign in to the [Azure portal](https://portal.azure.com).
+1. Sign in to the [Azure portal].
2. Select **All resources** in the **Azure services** section, then select your Azure Maps account.
To view your Azure Maps authentication details:
## Choose an authentication category
-Depending on your application needs, there are specific pathways to application security. Azure AD defines specific authentication categories to support a wide range of authentication flows. To choose the best category for your application, see [application categories](../active-directory/develop/authentication-flows-app-scenarios.md#application-categories).
+Depending on your application needs, there are specific pathways to application security. Azure AD defines specific authentication categories to support a wide range of authentication flows. To choose the best category for your application, see [application categories].
> [!NOTE] > Understanding categories and scenarios will help you secure your Azure Maps application, whether you use Azure Active Directory or shared key authentication. ## How to add and remove managed identities
-To enable [Shared access signature (SAS) token authentication](./azure-maps-authentication.md#shared-access-signature-token-authentication) with the Azure Maps REST API you need to add a user-assigned managed identity to your Azure Maps account.
+To enable [Shared access signature (SAS) token authentication] with the Azure Maps REST API, you need to add a user-assigned managed identity to your Azure Maps account.
### Create a managed identity
-You can create a user-assigned managed identity before or after creating a map account. You can add the managed identity through the portal, Azure management SDKs, or the Azure Resource Manager (ARM) template. To add a user-assigned managed identity through an ARM template, specify the resource identifier of the user-assigned managed identity. See example below:
+You can create a user-assigned managed identity before or after creating a map account. You can add the managed identity through the portal, Azure management SDKs, or the Azure Resource Manager (ARM) template. To add a user-assigned managed identity through an ARM template, specify the resource identifier of the user-assigned managed identity.
```json "identity": {
You can create a user-assigned managed identity before or after creating a map a
You can remove a system-assigned identity by disabling the feature through the portal or the Azure Resource Manager template in the same way that it was created. User-assigned identities can be removed individually. To remove all identities, set the identity type to `"None"`.
-Removing a system-assigned identity in this way will also delete it from Azure AD. System-assigned identities are also automatically removed from Azure AD when the Azure Maps account is deleted.
+Removing a system-assigned identity in this way also deletes it from Azure AD. System-assigned identities are also automatically removed from Azure AD when the Azure Maps account is deleted.
To remove all identities by using the Azure Resource Manager template, update this section:
To remove all identities by using the Azure Resource Manager template, update th
## Choose an authentication and authorization scenario
-This table outlines common authentication and authorization scenarios in Azure Maps. Each scenario describes a type of app which can be used to access Azure Maps REST API. Use the links to learn detailed configuration information for each scenario.
+This table outlines common authentication and authorization scenarios in Azure Maps. Each scenario describes a type of app that can be used to access the Azure Maps REST API. Use the links to learn detailed configuration information for each scenario.
> [!IMPORTANT] > For production applications, we recommend implementing Azure AD with Azure role-based access control (Azure RBAC).
-| Scenario | Authentication | Authorization | Development effort | Operational effort |
-| -- | -- | - | | |
-| [Trusted daemon app or non-interactive client app](./how-to-secure-daemon-app.md) | Shared Key | N/A | Medium | High |
-| [Trusted daemon or non-interactive client app](./how-to-secure-daemon-app.md) | Azure AD | High | Low | Medium |
-| [Web single page app with interactive single-sign-on](./how-to-secure-spa-users.md) | Azure AD | High | Medium | Medium |
-| [Web single page app with non-interactive sign-on](./how-to-secure-spa-app.md) | Azure AD | High | Medium | Medium |
-| [Web app, daemon app, or non-interactive sign-on app](./how-to-secure-sas-app.md) | SAS Token | High | Medium | Low |
-| [Web application with interactive single-sign-on](./how-to-secure-webapp-users.md) | Azure AD | High | High | Medium |
-| [IoT device or an input constrained application](./how-to-secure-device-code.md) | Azure AD | High | Medium | Medium |
+| Scenario | Authentication | Authorization | Development effort | Operational effort |
+| --| -- | - | | |
+| [Trusted daemon app or non-interactive client app] | Shared Key | N/A | Medium | High |
+| [Trusted daemon or non-interactive client app] | Azure AD | High | Low | Medium |
+| [Web single page app with interactive single-sign-on]| Azure AD | High | Medium | Medium |
+| [Web single page app with non-interactive sign-on] | Azure AD | High | Medium | Medium |
+| [Web app, daemon app, or non-interactive sign-on app]| SAS Token | High | Medium | Low |
+| [Web application with interactive single-sign-on] | Azure AD | High | High | Medium |
+| [IoT device or an input constrained application] | Azure AD | High | Medium | Medium |
## View built-in Azure Maps role definitions
Request a token from the Azure AD token endpoint. In your Azure AD request, use
| Azure public cloud | `https://login.microsoftonline.com` | `https://atlas.microsoft.com/` | | Azure Government cloud | `https://login.microsoftonline.us` | `https://atlas.microsoft.com/` |
-For more information about requesting access tokens from Azure AD for users and service principals, see [Authentication scenarios for Azure AD](../active-directory/develop/authentication-vs-authorization.md). To view specific scenarios, see [the table of scenarios](./how-to-manage-authentication.md#choose-an-authentication-and-authorization-scenario).
+For more information about requesting access tokens from Azure AD for users and service principals, see [Authentication scenarios for Azure AD]. To view specific scenarios, see [the table of scenarios].
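As a hedged illustration (not from the original article), an access token for the public-cloud resource URL in the table above can be acquired in C# with the `Azure.Identity` library; the scope string combines that resource ID with `/.default`, and the credential chain shown is an assumption about how the app authenticates.

```csharp
using System;
using Azure.Core;
using Azure.Identity;

class MapsTokenDemo
{
    static void Main()
    {
        // DefaultAzureCredential tries environment variables, managed identity,
        // Visual Studio, Azure CLI, and other local credential sources in turn.
        var credential = new DefaultAzureCredential();

        // Request a token scoped to the Azure Maps resource (public cloud endpoint).
        AccessToken token = credential.GetToken(
            new TokenRequestContext(new[] { "https://atlas.microsoft.com/.default" }));

        // The bearer token goes in the Authorization header of Azure Maps REST calls,
        // alongside the x-ms-client-id header carrying the Maps account's client ID.
        Console.WriteLine($"Token expires on {token.ExpiresOn:u}");
    }
}
```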
## Manage and rotate shared keys
Your Azure Maps subscription keys are similar to a root password for your Azure
### Manually rotate subscription keys
-To help keep your Azure Maps account secure, we recommend periodically rotating your subscription keys. If possible, use Azure Key Vault to manage your access keys. If you aren't using Key Vault, you'll need to manually rotate your keys.
+To help keep your Azure Maps account secure, we recommend periodically rotating your subscription keys. If possible, use Azure Key Vault to manage your access keys. If you aren't using Key Vault, you need to manually rotate your keys.
Two subscription keys are assigned so that you can rotate your keys. Having two keys ensures that your application maintains access to Azure Maps throughout the process. To rotate your Azure Maps subscription keys in the Azure portal: 1. Update your application code to reference the secondary key for the Azure Maps account and deploy.
-2. In the [Azure portal](https://portal.azure.com/), navigate to your Azure Maps account.
+2. In the [Azure portal], navigate to your Azure Maps account.
3. Under **Settings**, select **Authentication**. 4. To regenerate the primary key for your Azure Maps account, select the **Regenerate** button next to the primary key. 5. Update your application code to reference the new primary key and deploy.
To rotate your Azure Maps subscription keys in the Azure portal:
Find the API usage metrics for your Azure Maps account:
> [!div class="nextstepaction"]
-> [View usage metrics](how-to-view-api-usage.md)
+> [View usage metrics]
Explore samples that show how to integrate Azure AD with Azure Maps:
> [!div class="nextstepaction"]
-> [Azure AD authentication samples](https://github.com/Azure-Samples/Azure-Maps-AzureAD-Samples)
+> [Azure AD authentication samples]
+
+[Azure portal]: https://portal.azure.com/
+[Azure AD authentication samples]: https://github.com/Azure-Samples/Azure-Maps-AzureAD-Samples
+[View usage metrics]: how-to-view-api-usage.md
+[Authentication scenarios for Azure AD]: ../active-directory/develop/authentication-vs-authorization.md
+[the table of scenarios]: how-to-manage-authentication.md#choose-an-authentication-and-authorization-scenario
+[Trusted daemon app or non-interactive client app]: how-to-secure-daemon-app.md
+[Trusted daemon or non-interactive client app]: how-to-secure-daemon-app.md
+[Web single page app with interactive single-sign-on]: how-to-secure-spa-users.md
+[Web single page app with non-interactive sign-on]: how-to-secure-spa-app.md
+[Web app, daemon app, or non-interactive sign-on app]: how-to-secure-sas-app.md
+[Web application with interactive single-sign-on]: how-to-secure-webapp-users.md
+[IoT device or an input constrained application]: how-to-secure-device-code.md
+[Shared access signature (SAS) token authentication]: azure-maps-authentication.md#shared-access-signature-token-authentication
+[application categories]: ../active-directory/develop/authentication-flows-app-scenarios.md#application-categories
+[Azure Active Directory (Azure AD)]: ../active-directory/fundamentals/active-directory-whatis.md
+[Shared Key authentication]: azure-maps-authentication.md#shared-key-authentication
+[free account]: https://azure.microsoft.com/free/
+[managed identities for Azure resources]: ../active-directory/managed-identities-azure-resources/overview.md
+[Managed identity types]: ../active-directory/managed-identities-azure-resources/overview.md#managed-identity-types
+[An Azure Maps account]: quick-demo-map-app.md#create-an-azure-maps-account
+[Azure Maps Authentication]: azure-maps-authentication.md
azure-maps How To Manage Creator https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/how-to-manage-creator.md
Title: Manage Microsoft Azure Maps Creator
-description: In this article, you'll learn how to manage Microsoft Azure Maps Creator.
+description: This article demonstrates how to manage Microsoft Azure Maps Creator.
Last updated 01/20/2022
# Manage Azure Maps Creator
-You can use Azure Maps Creator to create private indoor map data. Using the Azure Maps API and the Indoor Maps module, you can develop interactive and dynamic indoor map web applications. For pricing information, see the *Creator* section in [Azure Maps pricing](https://aka.ms/CreatorPricing).
+You can use Azure Maps Creator to create private indoor map data. Using the Azure Maps API and the Indoor Maps module, you can develop interactive and dynamic indoor map web applications. For pricing information, see the *Creator* section in [Azure Maps pricing].
This article takes you through the steps to create and delete a Creator resource in an Azure Maps account.
## Create Creator resource
-1. Sign in to the [Azure portal](https://portal.azure.com)
+1. Sign in to the [Azure portal].
2. Navigate to the Azure portal menu. Select **All resources**, and then select your Azure Maps account.
To delete the Creator resource:
:::image type="content" source="./media/how-to-manage-creator/creator-delete.png" alt-text="A screenshot of the Azure Maps Creator Resource page with the delete button highlighted.":::
-3. You'll be asked to confirm deletion by typing in the name of your Creator resource. After the resource is deleted, you see a confirmation page that looks like the following:
+3. You're prompted to confirm deletion by typing in the name of your Creator resource. After the resource is deleted, you see a confirmation page that looks like the following example:
:::image type="content" source="./media/how-to-manage-creator/creator-confirm-delete.png" alt-text="A screenshot of the Azure Maps Creator Resource deletion confirmation page.":::
To delete the Creator resource:
Creator inherits Azure Maps Access Control (IAM) settings. All API calls for data access must be sent with authentication and authorization rules.
-Creator usage data is incorporated in your Azure Maps usage charts and activity log. For more information, see [Manage authentication in Azure Maps](./how-to-manage-authentication.md).
+Creator usage data is incorporated in your Azure Maps usage charts and activity log. For more information, see [Manage authentication in Azure Maps].
>[!Important]
>We recommend using:
>
-> * Azure Active Directory (Azure AD) in all solutions that are built with an Azure Maps account using Creator services. For more information, on Azure AD, see [Azure AD authentication](azure-maps-authentication.md#azure-ad-authentication).
+> * Azure Active Directory (Azure AD) in all solutions that are built with an Azure Maps account using Creator services. For more information on Azure AD, see [Azure AD authentication].
>
->* Role-based access control settings (RBAC). Using these settings, map makers can act as the Azure Maps Data Contributor role, and Creator map data users can act as the Azure Maps Data Reader role. For more information, see [Authorization with role-based access control](azure-maps-authentication.md#authorization-with-role-based-access-control).
+>* Role-based access control settings (RBAC). Using these settings, map makers can act as the Azure Maps Data Contributor role, and Creator map data users can act as the Azure Maps Data Reader role. For more information, see [Authorization with role-based access control].
## Access to Creator services
-Creator services and services that use data hosted in Creator (for example, Render service), are accessible at a geographical URL. The geographical URL is determined by the location selected during creation. For example, if Creator is created in a region in the United States geographical location, all calls to the Conversion service must be submitted to `us.atlas.microsoft.com/conversions`. To view mappings of region to geographical location, [see Creator service geographic scope](creator-geographic-scope.md).
+Creator services and services that use data hosted in Creator (for example, Render service) are accessible at a geographical URL. The geographical URL is determined by the location selected during creation. For example, if Creator is created in a region in the United States geographical location, all calls to the Conversion service must be submitted to `us.atlas.microsoft.com/conversions`. To view mappings of region to geographical location, [see Creator service geographic scope].
Also, all data imported into Creator should be uploaded into the same geographical location as the Creator resource. For example, if Creator is provisioned in the United States, all raw data should be uploaded via `us.atlas.microsoft.com/mapData/upload`.
Also, all data imported into Creator should be uploaded into the same geographic
Introduction to Creator services for indoor mapping:
> [!div class="nextstepaction"]
-> [Data upload](creator-indoor-maps.md#upload-a-drawing-package)
+> [Data upload]
> [!div class="nextstepaction"]
-> [Data conversion](creator-indoor-maps.md#convert-a-drawing-package)
+> [Data conversion]
> [!div class="nextstepaction"]
-> [Dataset](creator-indoor-maps.md#datasets)
+> [Dataset]
> [!div class="nextstepaction"]
-> [Tileset](creator-indoor-maps.md#tilesets)
+> [Tileset]
> [!div class="nextstepaction"]
-> [Feature State set](creator-indoor-maps.md#feature-statesets)
+> [Feature State set]
Learn how to use the Creator services to render indoor maps in your application:
> [!div class="nextstepaction"]
-> [Azure Maps Creator tutorial](tutorial-creator-indoor-maps.md)
+> [Azure Maps Creator tutorial]
> [!div class="nextstepaction"]
-> [Indoor map dynamic styling](indoor-map-dynamic-styling.md)
+> [Indoor map dynamic styling]
> [!div class="nextstepaction"]
-> [Use the Indoor Maps module](how-to-use-indoor-module.md)
+> [Use the Indoor Maps module]
+
+[Authorization with role-based access control]: azure-maps-authentication.md#authorization-with-role-based-access-control
+[Azure AD authentication]: azure-maps-authentication.md#azure-ad-authentication
+[Azure Maps Creator tutorial]: tutorial-creator-indoor-maps.md
+[Azure Maps pricing]: https://aka.ms/CreatorPricing
+[Azure portal]: https://portal.azure.com
+[Data conversion]: creator-indoor-maps.md#convert-a-drawing-package
+[Data upload]: creator-indoor-maps.md#upload-a-drawing-package
+[Dataset]: creator-indoor-maps.md#datasets
+[Feature State set]: creator-indoor-maps.md#feature-statesets
+[Indoor map dynamic styling]: indoor-map-dynamic-styling.md
+[Manage authentication in Azure Maps]: how-to-manage-authentication.md
+[see Creator service geographic scope]: creator-geographic-scope.md
+[Tileset]: creator-indoor-maps.md#tilesets
+[Use the Indoor Maps module]: how-to-use-indoor-module.md
azure-maps Web Sdk Best Practices https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/web-sdk-best-practices.md
Title: Azure Maps Web SDK best practices description: Learn tips & tricks to optimize your use of the Azure Maps Web SDK. -- Previously updated : 11/29/2021++ Last updated : 04/13/2023
Generally, when looking to improve performance of the map, look for ways to redu
## Security best practices
-For security best practices, see [Authentication and authorization best practices](authentication-best-practices.md).
+For more information on security best practices, see [Authentication and authorization best practices].
### Use the latest versions of Azure Maps
-The Azure Maps SDKs go through regular security testing along with any external dependency libraries that may be used by the SDKs. Any known security issue is fixed in a timely manner and released to production. If your application points to the latest major version of the hosted version of the Azure Maps Web SDK, it will automatically receive all minor version updates that will include security related fixes.
+The Azure Maps SDKs go through regular security testing along with any external dependency libraries used by the SDKs. Any known security issue is fixed in a timely manner and released to production. If your application points to the latest major version of the hosted version of the Azure Maps Web SDK, it automatically receives all minor version updates that include security related fixes.
-If self-hosting the Azure Maps Web SDK via the npm module, be sure to use the caret (^) symbol to in combination with the Azure Maps npm package version number in your `package.json` file so that it will always point to the latest minor version.
+If self-hosting the Azure Maps Web SDK via the npm module, be sure to use the caret (^) symbol in combination with the Azure Maps npm package version number in your `package.json` file so that it always points to the latest minor version.
```json
"dependencies": {
- "azure-maps-control": "^2.0.30"
+ "azure-maps-control": "^2.2.6"
}
```
+> [!TIP]
+> Always use the latest version of the npm Azure Maps Control. For more information, see [azure-maps-control] in the npm documentation.
+
## Optimize initial map load
When a web page is loading, one of the first things you want to do is start rendering something as soon as possible so that the user isn't staring at a blank screen.
### Watch the maps ready event
-Similarly, when the map initially loads often it is desired to load data on it as quickly as possible, so the user isn't looking at an empty map. Since the map loads resources asynchronously, you have to wait until the map is ready to be interacted with before trying to render your own data on it. There are two events you can wait for, a `load` event and a `ready` event. The load event will fire after the map has finished completely loading the initial map view and every map tile has loaded. The ready event will fire when the minimal map resources needed to start interacting with the map. The ready event can often fire in half the time of the load event and thus allow you to start loading your data into the map sooner.
+Similarly, when the map initially loads, it's often desirable to load data onto it as quickly as possible so the user isn't looking at an empty map. Since the map loads resources asynchronously, you have to wait until the map is ready to be interacted with before trying to render your own data on it. There are two events you can wait for, a `load` event and a `ready` event. The load event fires after the map has finished completely loading the initial map view and every map tile has loaded. The ready event fires when the minimal map resources needed to start interacting with the map have loaded. The ready event can often fire in half the time of the load event and thus allow you to start loading your data into the map sooner.
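For example, a minimal sketch of this pattern (the map `div` id, camera values, and key placeholder are illustrative) adds a source and layer inside the `ready` handler:

```javascript
// Minimal sketch: add data as soon as the 'ready' event fires instead of waiting for 'load'.
var map = new atlas.Map('myMap', {
    center: [-122.33, 47.6],
    zoom: 12,
    authOptions: { authType: 'subscriptionKey', subscriptionKey: '<Your Azure Maps Key>' }
});

map.events.add('ready', function () {
    // Minimal map resources are available; it's now safe to add sources and layers.
    var source = new atlas.source.DataSource();
    map.sources.add(source);
    map.layers.add(new atlas.layer.BubbleLayer(source));
});
```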
### Lazy load the Azure Maps Web SDK
-If the map isn't needed right away, lazy load the Azure Maps Web SDK until it is needed. This delays the loading of the JavaScript and CSS files used by the Azure Maps Web SDK until needed. A common scenario where this occurs is when the map is loaded in a tab or flyout panel that isn't displayed on page load.
+If the map isn't needed right away, lazy load the Azure Maps Web SDK until it's needed. This delays the loading of the JavaScript and CSS files used by the Azure Maps Web SDK until needed. A common scenario where this occurs is when the map is loaded in a tab or flyout panel that isn't displayed on page load.
The following code sample shows how to delay loading the Azure Maps Web SDK until a button is pressed. <br/>
The following code sample shows how to delay the loading the Azure Maps Web SDK
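A rough sketch of this approach follows; the element ids (`showMapBtn`, `myMap`) and the key placeholder are assumptions, not part of any specific sample:

```javascript
// Minimal sketch: inject the Azure Maps CSS and JavaScript only when the user asks for the map.
document.getElementById('showMapBtn').addEventListener('click', function () {
    var css = document.createElement('link');
    css.rel = 'stylesheet';
    css.href = 'https://atlas.microsoft.com/sdk/javascript/mapcontrol/2/atlas.min.css';
    document.head.appendChild(css);

    var script = document.createElement('script');
    script.src = 'https://atlas.microsoft.com/sdk/javascript/mapcontrol/2/atlas.min.js';
    script.onload = function () {
        // The SDK is now available, so the map can be initialized.
        new atlas.Map('myMap', {
            authOptions: { authType: 'subscriptionKey', subscriptionKey: '<Your Azure Maps Key>' }
        });
    };
    document.head.appendChild(script);
});
```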
### Add a placeholder for the map
-If the map takes a while to load due to network limitations or other priorities within your application, consider adding a small background image to the map `div` as a placeholder for the map. This fills the void of the map `div` while it is loading.
+If the map takes a while to load due to network limitations or other priorities within your application, consider adding a small background image to the map `div` as a placeholder for the map. This fills the void of the map `div` while it's loading.
### Set initial map style and camera options on initialization
-Often apps want to load the map to a specific location or style. Sometimes developers will wait until the map has loaded (or wait for the `ready` event), and then use the `setCemer` or `setStyle` functions of the map. This often takes longer to get to the desired initial map view since many resources end up being loaded by default before the resources needed for the desired map view are loaded. A better approach is to pass in the desired map camera and style options into the map when initializing it.
+Often apps want to load the map to a specific location or style. Sometimes developers wait until the map has loaded (or wait for the `ready` event), and then use the `setCamera` or `setStyle` functions of the map. This often takes longer to get to the desired initial map view since many resources end up being loaded by default before the resources needed for the desired map view are loaded. A better approach is to pass in the desired map camera and style options into the map when initializing it.
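A minimal sketch of passing camera and style options at initialization (the values shown are illustrative):

```javascript
// Minimal sketch: pass the desired camera and style into the constructor
// instead of calling setCamera or setStyle after the map loads.
var map = new atlas.Map('myMap', {
    center: [-122.33, 47.6],   // desired initial camera
    zoom: 14,
    style: 'night',            // desired initial style
    authOptions: { authType: 'subscriptionKey', subscriptionKey: '<Your Azure Maps Key>' }
});
```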
## Optimize data sources
The Web SDK has two data sources,
-* **GeoJSON source**: Known as the `DataSource` class, manages raw location data in GeoJSON format locally. Good for small to medium data sets (upwards of hundreds of thousands of features).
-* **Vector tile source**: Known at the `VectorTileSource` class, loads data formatted as vector tiles for the current map view, based on the maps tiling system. Ideal for large to massive data sets (millions or billions of features).
+* **GeoJSON source**: The `DataSource` class manages raw location data in GeoJSON format locally. Good for small to medium data sets (upwards of hundreds of thousands of features).
+* **Vector tile source**: The `VectorTileSource` class loads data formatted as vector tiles for the current map view, based on the maps tiling system. Ideal for large to massive data sets (millions or billions of features).
### Use tile-based solutions for large datasets
If working with larger datasets containing millions of features, the recommended way to achieve optimal performance is to expose the data using a server-side solution such as a vector or raster image tile service. Vector tiles are optimized to load only the data that is in view with the geometries clipped to the focus area of the tile and generalized to match the resolution of the map for the zoom level of the tile.
-The [Azure Maps Creator platform](creator-indoor-maps.md) provides the ability to retrieve data in vector tile format. Other data formats can be using tools such as [Tippecanoe](https://github.com/mapbox/tippecanoe) or one of the many [resources list on this page](https://github.com/mapbox/awesome-vector-tiles).
+The [Azure Maps Creator platform] provides data in vector tile format. Other data formats can be converted using tools such as [Tippecanoe]. For more information on working with vector tiles, see the Mapbox [awesome-vector-tiles] readme in GitHub.
-It is also possible to create a custom service that renders datasets as raster image tiles on the server-side and load the data using the TileLayer class in the map SDK. This provides exceptional performance as the map only needs to load and manage a few dozen images at most. However, there are some limitations with using raster tiles since the raw data is not available locally. A secondary service is often required to power any type of interaction experience, for example, find out what shape a user clicked on. Additionally, the file size of a raster tile is often larger than a compressed vector tile that contains generalized and zoom level optimized geometries.
+It's also possible to create a custom service that renders datasets as raster image tiles on the server-side and load the data using the TileLayer class in the map SDK. This provides exceptional performance as the map only needs to load and manage a few dozen images at most. However, there are some limitations with using raster tiles since the raw data isn't available locally. A secondary service is often required to power any type of interaction experience, for example, find out what shape a user clicked on. Additionally, the file size of a raster tile is often larger than a compressed vector tile that contains generalized and zoom level optimized geometries.
-Learn more about data sources in the [Create a data source](create-data-source-web-sdk.md) document.
+For more information about data sources, see [Create a data source].
### Combine multiple datasets into a single vector tile source
-The less data sources the map has to manage, the faster it can process all features to be displayed. In particular, when it comes to tile sources, combining two vector tile sources together cuts the number of HTTP requests to retrieve the tiles in half, and the total amount of data would be slightly smaller since there is only one file header.
+The fewer data sources the map has to manage, the faster it can process all features to be displayed. In particular, when it comes to tile sources, combining two vector tile sources together cuts the number of HTTP requests to retrieve the tiles in half, and the total amount of data would be slightly smaller since there's only one file header.
-Combining multiple data sets in a single vector tile source can be achieved using a tool such as [Tippecanoe](https://github.com/mapbox/tippecanoe). Data sets can be combined into a single feature collection or separated into separate layers within the vector tile known as source-layers. When connecting a vector tile source to a rendering layer, you would specify the source-layer that contains the data that you want to render with the layer.
+Combining multiple data sets in a single vector tile source can be achieved using a tool such as [Tippecanoe]. Data sets can be combined into a single feature collection or separated into separate layers within the vector tile known as source-layers. When connecting a vector tile source to a rendering layer, you would specify the source-layer that contains the data that you want to render with the layer.
### Reduce the number of canvas refreshes due to data updates
-There are several ways data in a `DataSource` class can be added or updated. Listed below are the different methods and some considerations to ensure good performance.
+There are several ways data in a `DataSource` class can be added or updated. The following list shows the different methods and some considerations to ensure good performance.
-* The data sources `add` function can be used to add one or more features to a data source. Each time this function is called it will trigger a map canvas refresh. If adding many features, combine them into an array or feature collection and passing them into this function once, rather than looping over a data set and calling this function for each feature.
-* The data sources `setShapes` function can be used to overwrite all shapes in a data source. Under the hood, it combines the data sources `clear` and `add` functions together and does a single map canvas refresh instead of two, which is much faster. Be sure to use this when you want to update all data in a data source.
-* The data sources `importDataFromUrl` function can be used to load a GeoJSON file via a URL into a data source. Once the data has been downloaded, it is passed into the data sources `add` function. If the GeoJSON file is hosted on a different domain, be sure that the other domain supports cross domain requests (CORs). If it doesn't consider copying the data to a local file on your domain or creating a proxy service that has CORs enabled. If the file is large, consider converting it into a vector tile source.
-* If features are wrapped with the `Shape` class, the `addProperty`, `setCoordinates`, and `setProperties` functions of the shape will all trigger an update in the data source and a map canvas refresh. All features returned by the data sources `getShapes` and `getShapeById` functions are automatically wrapped with the `Shape` class. If you want to update several shapes, it is faster to convert them to JSON using the data sources `toJson` function, editing the GeoJSON, then passing this data into the data sources `setShapes` function.
+* The data sources `add` function can be used to add one or more features to a data source. Each time this function is called it triggers a map canvas refresh. If adding many features, combine them into an array or feature collection and pass them into this function once, rather than looping over a data set and calling this function for each feature (see the sketch after this list).
+* The data sources `setShapes` function can be used to overwrite all shapes in a data source. Under the hood, it combines the data sources `clear` and `add` functions together and does a single map canvas refresh instead of two, which is faster. Be sure to use this function when you want to update all data in a data source.
+* The data sources `importDataFromUrl` function can be used to load a GeoJSON file via a URL into a data source. Once the data has been downloaded, it's passed into the data sources `add` function. If the GeoJSON file is hosted on a different domain, be sure that the other domain supports cross domain requests (CORS). If it doesn't, consider copying the data to a local file on your domain or creating a proxy service that has CORS enabled. If the file is large, consider converting it into a vector tile source.
+* If features are wrapped with the `Shape` class, the `addProperty`, `setCoordinates`, and `setProperties` functions of the shape all trigger an update in the data source and a map canvas refresh. All features returned by the data sources `getShapes` and `getShapeById` functions are automatically wrapped with the `Shape` class. If you want to update several shapes, it's faster to convert them to JSON using the data sources `toJson` function, editing the GeoJSON, then passing this data into the data sources `setShapes` function.
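A minimal sketch of batching updates, assuming `map` is an initialized map and `features`/`newFeatures` are GeoJSON feature arrays:

```javascript
// Minimal sketch: batch feature updates so the canvas refreshes once, not once per feature.
var source = new atlas.source.DataSource();
map.sources.add(source);

// Avoid: one canvas refresh per feature.
// features.forEach(function (f) { source.add(f); });

// Prefer: a single call triggers a single canvas refresh.
source.add(features);

// To replace everything, setShapes does a clear + add with one refresh.
source.setShapes(newFeatures);
```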
### Avoid calling the data sources clear function unnecessarily
Calling the clear function of the `DataSource` class causes a map canvas refresh. If the `clear` function is called multiple times in a row, a delay can occur while the map waits for each refresh to occur.
-A common scenario where this often appears in applications is when an app clears the data source, downloads new data, clears the data source again then adds the new data to the data source. Depending on the desired user experience, the following alternatives would be better.
+This is a common scenario in applications that clear the data source, download new data, clear the data source again, then add the new data to the data source. Depending on the desired user experience, the following alternatives would be better.
-* Clear the data before downloading the new data, then pass the new data into the data sources `add` or `setShapes` function. If this is the only data set on the map, the map will be empty while the new data is downloading.
-* Download the new data, then pass it into the data sources `setShapes` function. This will replace all the data on the map.
+* Clear the data before downloading the new data, then pass the new data into the data sources `add` or `setShapes` function. If this is the only data set on the map, the map is empty while the new data is downloading.
+* Download the new data, then pass it into the data sources `setShapes` function. This replaces all the data on the map.
### Remove unused features and properties
If your dataset contains features that aren't going to be used in your app, remo
* Reduces the number of features that need to be looped through when rendering the data.
* Can sometimes help simplify or remove data-driven expressions and filters, which means less processing required at render time.
-When features have numerous properties or content, it is much more performant to limit what gets added to the data source to just those needed for rendering and to have a separate method or service for retrieving the additional property or content when needed. For example, if you have a simple map displaying locations on a map when clicked a bunch of detailed content is displayed. If you want to use data driven styling to customize how the locations are rendered on the map, only load the properties needed into the data source. When you want to display the detailed content, use the ID of the feature to retrieve the additional content separately. If the content is stored on the server-side, a service can be used to retrieve it asynchronously, which would drastically reduce the amount of data that needs to be downloaded when the map is initially loaded.
+When features have numerous properties or content, it's much more performant to limit what gets added to the data source to just those needed for rendering and to have a separate method or service for retrieving the other property or content when needed. For example, consider a simple map that displays locations and shows detailed content when a location is clicked. If you want to use data driven styling to customize how the locations are rendered on the map, only load the properties needed into the data source. When you want to display the detailed content, use the ID of the feature to retrieve the other content separately. If the content is stored on the server, you can reduce the amount of data that needs to be downloaded when the map is initially loaded by using a service to retrieve it asynchronously.
-Additionally, reducing the number of significant digits in the coordinates of features can also significantly reduce the data size. It is not uncommon for coordinates to contain 12 or more decimal places; however, six decimal places have an accuracy of about 0.1 meter, which is often more precise than the location the coordinate represents (six decimal places is recommended when working with small location data such as indoor building layouts). Having any more than six decimal places will likely make no difference in how the data is rendered and will only require the user to download more data for no added benefit.
+Additionally, reducing the number of significant digits in the coordinates of features can also significantly reduce the data size. It isn't uncommon for coordinates to contain 12 or more decimal places; however, six decimal places have an accuracy of about 0.1 meter, which is often more precise than the location the coordinate represents (six decimal places is recommended when working with small location data such as indoor building layouts). Having any more than six decimal places will likely make no difference in how the data is rendered and requires the user to download more data for no added benefit.
-Here is a list of [useful tools for working with GeoJSON data](https://github.com/tmcw/awesome-geojson).
+Here's a list of [useful tools for working with GeoJSON data].
### Use a separate data source for rapidly changing data
-Sometimes there is a need to rapidly update data on the map for things such as showing live updates of streaming data or animating features. When a data source is updated, the rendering engine will loop through and render all features in the data source. Separating static data from rapidly changing data into different data sources can significantly reduce the number of features that are re-rendered on each update to the data source and improve overall performance.
+Sometimes there's a need to rapidly update data on the map for things such as showing live updates of streaming data or animating features. When a data source is updated, the rendering engine loops through and renders all features in the data source. Improve overall performance by separating static data from rapidly changing data into different data sources, reducing the number of features re-rendered during each update.
If using vector tiles with live data, an easy way to support updates is to use the `expires` response header. By default, any vector tile source or raster tile layer automatically reloads tiles when the `expires` date is reached. The traffic flow and incident tiles in the map use this feature to ensure fresh real-time traffic data is displayed on the map. This feature can be disabled by setting the maps `refreshExpiredTiles` service option to `false`.
If using vector tiles with live data, an easy way to support updates is to use t
The `DataSource` class converts raw location data into vector tiles locally for on-the-fly rendering. These local vector tiles clip the raw data to the bounds of the tile area with a bit of buffer to ensure smooth rendering between tiles. The smaller the `buffer` option is, the less overlapping data is stored in the local vector tiles and the better the performance; however, the greater the chance of rendering artifacts occurring. Try tweaking this option to get the right mix of performance with minimal rendering artifacts.
-The `DataSource` class also has a `tolerance` option that is used with the Douglas-Peucker simplification algorithm when reducing the resolution of geometries for rendering purposes. Increasing this tolerance value will reduce the resolution of geometries and in turn improve performance. Tweak this option to get the right mix of geometry resolution and performance for your data set.
+The `DataSource` class also has a `tolerance` option that is used with the Douglas-Peucker simplification algorithm when reducing the resolution of geometries for rendering purposes. Increasing this tolerance value reduces the resolution of geometries and in turn improves performance. Tweak this option to get the right mix of geometry resolution and performance for your data set.
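A minimal sketch of tuning these options (the values are illustrative starting points, not recommendations for every data set):

```javascript
// Minimal sketch: tune local vector tile generation for a GeoJSON data source.
var source = new atlas.source.DataSource(null, {
    buffer: 8,       // smaller buffer = less duplicated data per tile, but a higher chance of edge artifacts
    tolerance: 1.0   // higher tolerance = more geometry simplification and better performance
});
```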
### Set the max zoom option of GeoJSON data sources
-The `DataSource` class converts raw location data into vector tiles local for on-the-fly rendering. By default, it will do this until zoom level 18, at which point, when zoomed in closer, it will sample data from the tiles generated for zoom level 18. This works well for most data sets that need to have high resolution when zoomed in at these levels. However, when working with data sets that are more likely to be viewed when zoomed out more, such as when viewing state or province polygons, setting the `minZoom` option of the data source to a smaller value such as `12` will reduce the amount computation, local tile generation that occurs, and memory used by the data source and increase performance.
+The `DataSource` class converts raw location data into vector tiles locally for on-the-fly rendering. By default, it does this until zoom level 18, at which point, when zoomed in closer, it samples data from the tiles generated for zoom level 18. This works well for most data sets that need to have high resolution when zoomed in at these levels. However, when working with data sets that are more likely to be viewed when zoomed out more, such as when viewing state or province polygons, setting the `maxZoom` option of the data source to a smaller value such as `12` reduces the amount of computation, local tile generation, and memory used by the data source, and increases performance.
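For example, a minimal sketch for polygon data that's mostly viewed when zoomed out:

```javascript
// Minimal sketch: cap local tile generation for data that's mostly viewed zoomed out,
// such as state or province polygons.
var source = new atlas.source.DataSource(null, {
    maxZoom: 12
});
```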
### Minimize GeoJSON response
When loading GeoJSON data from a server either through a service or by loading a
### Access raw GeoJSON using a URL
-It is possible to store GeoJSON objects inline inside of JavaScript, however this will use a lot of memory as copies of it will be stored across the variable you created for this object and the data source instance, which manages it within a separate web worker. Expose the GeoJSON to your app using a URL instead and the data source will load a single copy of data directly into the data sources web worker.
+It's possible to store GeoJSON objects inline inside of JavaScript, however this uses more memory as copies of it are stored across the variable you created for this object and the data source instance, which manages it within a separate web worker. Expose the GeoJSON to your app using a URL instead and the data source loads a single copy of data directly into the data sources web worker.
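A minimal sketch, assuming the GeoJSON file is hosted at a placeholder URL on the same domain:

```javascript
// Minimal sketch: load GeoJSON by URL so only the data source's web worker holds a copy.
var source = new atlas.source.DataSource();
map.sources.add(source);
source.importDataFromUrl('/data/points.json');
```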
## Optimize rendering layers
Azure maps provides several different layers for rendering data on a map. There
### Create layers once and reuse them
-The Azure Maps Web SDK is decided to be data driven. Data goes into data sources, which are then connected to rendering layers. If you want to change the data on the map, update the data in the data source or change the style options on a layer. This is often much faster than removing and then recreating layers whenever there is a change.
+The Azure Maps Web SDK is data driven. Data goes into data sources, which are then connected to rendering layers. If you want to change the data on the map, update the data in the data source or change the style options on a layer. This is often faster than removing, then recreating layers with every change.
### Consider bubble layer over symbol layer
-The bubble layer renders points as circles on the map and can easily have their radius and color styled using a data-driven expression. Since the circle is a simple shape for WebGL to draw, the rendering engine will be able to render these much faster than a symbol layer, which has to load and render an image. The performance difference of these two rendering layers is noticeable when rendering tens of thousands of points.
+The bubble layer renders points as circles on the map, and their radius and color can easily be styled using a data-driven expression. Since the circle is a simple shape for WebGL to draw, the rendering engine can render these faster than a symbol layer, which has to load and render an image. The performance difference of these two rendering layers is noticeable when rendering tens of thousands of points.
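A minimal sketch of a bubble layer attached to an existing data source:

```javascript
// Minimal sketch: render a large point data source with a bubble layer instead of a symbol layer.
map.layers.add(new atlas.layer.BubbleLayer(source, null, {
    radius: 5,
    color: 'DodgerBlue'
}));
```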
### Use HTML markers and Popups sparingly
-Unlike most layers in the Azure Maps Web control that use WebGL for rendering, HTML Markers and Popups use traditional DOM elements for rendering. As such, the more HTML markers and Popups added a page, the more DOM elements there are. Performance can degrade after adding a few hundred HTML markers or popups. For larger data sets, consider either clustering your data or using a symbol or bubble layer. For popups, a common strategy is to create a single popup and reuse it by updating its content and position as shown in the below example:
+Unlike most layers in the Azure Maps Web control that use WebGL for rendering, HTML Markers and Popups use traditional DOM elements for rendering. As such, the more HTML markers and Popups added to a page, the more DOM elements there are. Performance can degrade after adding a few hundred HTML markers or popups. For larger data sets, consider either clustering your data or using a symbol or bubble layer. For popups, a common strategy is to create a single popup and reuse it by updating its content and position as shown in the following example:
<br/>
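A minimal sketch of the reuse pattern, assuming a `symbolLayer` already added to the map and features that carry a `name` property (both placeholders):

```javascript
// Minimal sketch: create one popup and reuse it for every clicked symbol.
var popup = new atlas.Popup();

map.events.add('click', symbolLayer, function (e) {
    if (e.shapes && e.shapes.length > 0) {
        var shape = e.shapes[0];
        popup.setOptions({
            content: '<div style="padding:10px">' + shape.getProperties().name + '</div>',
            position: shape.getCoordinates()
        });
        popup.open(map);
    }
});
```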
That said, if you only have a few points to render on the map, the simplicity of
### Combine layers
-The map is capable of rendering hundreds of layers, however, the more layers there are, the more time it takes to render a scene. One strategy to reduce the number of layers is to combine layers that have similar styles or can be styled using a [data-driven styles](data-driven-style-expressions-web-sdk.md).
+The map is capable of rendering hundreds of layers, however, the more layers there are, the more time it takes to render a scene. One strategy to reduce the number of layers is to combine layers that have similar styles or can be styled using [data-driven styles].
-For example, consider a data set where all features have a `isHealthy` property that can have a value of `true` or `false`. If creating a bubble layer that renders different colored bubbles based on this property, there are several ways to do this as listed below from least performant to most performant.
+For example, consider a data set where all features have a `isHealthy` property that can have a value of `true` or `false`. If creating a bubble layer that renders different colored bubbles based on this property, there are several ways to do this as shown in the following list, from least performant to most performant.
* Split the data into two data sources based on the `isHealthy` value and attach a bubble layer with a hard-coded color option to each data source.
-* Put all the data into a single data source and create two bubble layers with a hard-coded color option and a filter based on the `isHealthy` property.
-* Put all the data into a single data source, create a single bubble layer with a `case` style expression for the color option based on the `isHealthy` property. Here is a code sample that demonstrates this.
+* Put all the data into a single data source and create two bubble layers with a hard-coded color option and a filter based on the `isHealthy` property.
+* Put all the data into a single data source, create a single bubble layer with a `case` style expression for the color option based on the `isHealthy` property. Here's a code sample that demonstrates this.
```javascript var layer = new atlas.layer.BubbleLayer(source, null, {
var layer = new atlas.layer.BubbleLayer(source, null, {
Symbol layers have collision detection enabled by default. This collision detection aims to ensure that no two symbols overlap. The icon and text options of a symbol layer have two options,
-* `allowOverlap` - specifies if the symbol will be visible if it collides with other symbols.
+* `allowOverlap` - specifies if the symbol is visible when it collides with other symbols.
* `ignorePlacement` - specifies if the other symbols are allowed to collide with the symbol.
-Both of these options are set to `false` by default. When animating a symbol, the collision detection calculations will run on each frame of the animation, which can slow down the animation and make it look less fluid. To smooth out the animation, set these options to `true`.
+Both of these options are set to `false` by default. When animating a symbol, the collision detection calculations run on each frame of the animation, which can slow down the animation and make it look less fluid. To smooth out the animation, set these options to `true`.
The following code sample shows a simple way to animate a symbol layer.
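A minimal sketch, assuming a data source named `source` is already added to the map:

```javascript
// Minimal sketch: turn off collision detection so an animated symbol renders smoothly.
var layer = new atlas.layer.SymbolLayer(source, null, {
    iconOptions: {
        allowOverlap: true,
        ignorePlacement: true
    },
    textOptions: {
        allowOverlap: true,
        ignorePlacement: true
    }
});
map.layers.add(layer);
```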
If your data meets one of the following criteria, be sure to specify the min and
* If the data is coming from a vector tile source, often source layers for different data types are only available through a range of zoom levels.
* If using a tile layer that doesn't have tiles for all zoom levels 0 through 24 and you want it to only render at the levels it has tiles, and not try to fill in missing tiles with tiles from other zoom levels.
* If you only want to render a layer at certain zoom levels.
-All layers have a `minZoom` and `maxZoom` option where the layer will be rendered when between these zoom levels based on this logic `maxZoom > zoom >= minZoom`.
+All layers have a `minZoom` and `maxZoom` option. A layer is only rendered when the map zoom level is between these values, based on the logic `maxZoom > zoom >= minZoom`.
**Example**
var layer = new atlas.layer.BubbleLayer(dataSource, null, {
### Specify tile layer bounds and source zoom range
-By default, tile layers will load tiles across the whole globe. However, if the tile service only has tiles for a certain area the map will try to load tiles when outside of this area. When this happens, a request for each tile will be made and wait for a response that can block other requests being made by the map and thus slow down the rendering of other layers. Specifying the bounds of a tile layer will result in the map only requesting tiles that are within that bounding box. Also, if the tile layer is only available between certain zoom levels, specify the min and max source zoom for the same reason.
+By default, tile layers load tiles across the whole globe. However, if the tile service only has tiles for a certain area, the map tries to load tiles when outside of this area. When this happens, a request is made for each tile and waits for a response, which can block other requests being made by the map and thus slow down the rendering of other layers. Specifying the bounds of a tile layer results in the map only requesting tiles that are within that bounding box. Also, if the tile layer is only available between certain zoom levels, specify the min and max source zoom for the same reason.
**Example**
var tileLayer = new atlas.layer.TileLayer({
### Use a blank map style when base map not visible
-If a layer is being overlaid on the map that will completely cover the base map, consider setting the map style to `blank` or `blank_accessible` so that the base map isn't rendered. A common scenario for doing this is when overlaying a full globe tile at has no opacity or transparent area above the base map.
+If a layer overlaid on the map completely covers the base map, consider setting the map style to `blank` or `blank_accessible` so that the base map isn't rendered. A common scenario for doing this is when overlaying a full-globe tile layer that has no opacity or transparent areas above the base map.
### Smoothly animate image or tile layers
-If you want to animate through a series of image or tile layers on the map. It is often faster to create a layer for each image or tile layer and to change the opacity than to update the source of a single layer on each animation frame. Hiding a layer by setting the opacity to zero and showing a new layer by setting its opacity to a value greater than zero is much faster than updating the source in the layer. Alternatively, the visibility of the layers can be toggled, but be sure to set the fade duration of the layer to zero, otherwise it will animate the layer when displaying it, which will cause a flicker effect since the previous layer would have been hidden before the new layer is visible.
+If you want to animate through a series of image or tile layers on the map, it's often faster to create a layer for each image or tile layer and change the opacity than to update the source of a single layer on each animation frame. Hiding a layer by setting the opacity to zero and showing a new layer by setting its opacity to a value greater than zero is faster than updating the source in the layer. Alternatively, the visibility of the layers can be toggled, but be sure to set the fade duration of the layer to zero, otherwise it animates the layer when displaying it, which causes a flicker effect since the previous layer would have been hidden before the new layer is visible.
### Tweak Symbol layer collision detection logic
-The symbol layer has two options that exist for both icon and text called `allowOverlap` and `ignorePlacement`. These two options specify if the icon or text of a symbol can overlap or be overlapped. When these are set to `false`, the symbol layer will do calculations when rendering each point to see if it collides with any other already rendered symbol in the layer, and if it does, will not render the colliding symbol. This is good at reducing clutter on the map and reducing the number of objects rendered. By setting these options to `false`, this collision detection logic will be skipped, and all symbols will be rendered on the map. Tweak this option to get the best combination of performance and user experience.
+The symbol layer has two options that exist for both icon and text called `allowOverlap` and `ignorePlacement`. These two options specify if the icon or text of a symbol can overlap or be overlapped. When these are set to `false`, the symbol layer does calculations when rendering each point to see if it collides with any other already rendered symbol in the layer, and if it does, doesn't render the colliding symbol. This is good at reducing clutter on the map and reducing the number of objects rendered. By setting these options to `true`, this collision detection logic is skipped, and all symbols are rendered on the map. Tweak this option to get the best combination of performance and user experience.
### Cluster large point data sets
-When working with large sets of data points you may find that when rendered at certain zoom levels, many of the points overlap and are only partial visible, if at all. Clustering is process of grouping points that are close together and representing them as a single clustered point. As the user zooms the map in, clusters will break apart into their individual points. This can significantly reduce the amount of data that needs to be rendered, make the map feel less cluttered, and improve performance. The `DataSource` class has options for clustering data locally. Additionally, many tools that generate vector tiles also have clustering options.
+When working with large sets of data points, you may find that when rendered at certain zoom levels, many of the points overlap and are only partially visible, if at all. Clustering is the process of grouping points that are close together and representing them as a single clustered point. As the user zooms in the map, clusters break apart into their individual points. This can significantly reduce the amount of data that needs to be rendered, make the map feel less cluttered, and improve performance. The `DataSource` class has options for clustering data locally. Additionally, many tools that generate vector tiles also have clustering options.
-Additionally, increase the size of the cluster radius to improve performance. The larger the cluster radius, the less clustered points there is to keep track of and render.
-Learn more in the [Clustering point data document](clustering-point-data-web-sdk.md)
+Additionally, increase the size of the cluster radius to improve performance. The larger the cluster radius, the fewer clustered points there are to keep track of and render.
+For more information, see [Clustering point data in the Web SDK].
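A minimal sketch of enabling clustering on a GeoJSON data source (the radius and zoom values are illustrative):

```javascript
// Minimal sketch: cluster points locally in the data source.
var source = new atlas.source.DataSource(null, {
    cluster: true,
    clusterRadius: 45,    // in pixels; a larger radius means fewer clusters to track and render
    clusterMaxZoom: 15    // stop clustering once zoomed in past this level
});
```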
### Use weighted clustered heat maps
-The heat map layer can render tens of thousands of data points easily. For larger data sets, consider enabling clustering on the data source and using a small cluster radius and use the clusters `point_count` property as a weight for the height map. When the cluster radius is only a few pixels in size, there will be little visual difference in the rendered heat map. Using a larger cluster radius will improve performance more but may reduce the resolution of the rendered heat map.
+The heat map layer can render tens of thousands of data points easily. For larger data sets, consider enabling clustering on the data source, using a small cluster radius, and using the clusters `point_count` property as a weight for the heat map. When the cluster radius is only a few pixels in size, there's little visual difference in the rendered heat map. Using a larger cluster radius improves performance more but may reduce the resolution of the rendered heat map.
```javascript var layer = new atlas.layer.HeatMapLayer(source, null, {
var layer = new atlas.layer.HeatMapLayer(source, null, {
}); ```
-Learn more in the [Clustering and heat maps in this document](clustering-point-data-web-sdk.md#clustering-and-the-heat-maps-layer)
+For more information, see [Clustering and the heat maps layer].
### Keep image resources small
-Images can be added to the maps image sprite for rendering icons in a symbol layer or patterns in a polygon layer. Keep these images small to minimize the amount of data that has to be downloaded and the amount of space they take up in the maps image sprite. When using a symbol layer that scales the icon using the `size` option, use an image that is the maximum size your plan to display on the map and no bigger. This ensures the icon is rendered with high resolution while minimizing the resources it uses. Additionally, SVG's can also be used as a smaller file format for simple icon images.
+Images can be added to the maps image sprite for rendering icons in a symbol layer or patterns in a polygon layer. Keep these images small to minimize the amount of data that has to be downloaded and the amount of space they take up in the maps image sprite. When using a symbol layer that scales the icon using the `size` option, use an image that is the maximum size you plan to display on the map and no bigger. This ensures the icon is rendered with high resolution while minimizing the resources it uses. Additionally, SVGs can also be used as a smaller file format for simple icon images.
## Optimize expressions
-[Data-driven style expressions](data-driven-style-expressions-web-sdk.md) provide a lot of flexibility and power for filtering and styling data on the map. There are many ways in which expressions can be optimized. Here are a few tips.
+[Data-driven style expressions] provide flexibility and power for filtering and styling data on the map. There are many ways in which expressions can be optimized. Here are a few tips.
### Reduce the complexity of filters
Filters loop over all data in a data source and check to see if each filter matc
### Make sure expressions don't produce errors
-Expressions are often used to generate code to perform calculations or logical operations at render time. Just like the code in the rest of your application, be sure the calculations and logical make sense and are not error prone. Errors in expressions will cause issues in evaluating the expression, which can result in reduced performance and rendering issues.
+Expressions are often used to generate code to perform calculations or logical operations at render time. Just like the code in the rest of your application, be sure the calculations and logic make sense and aren't error prone. Errors in expressions cause issues in evaluating the expression, which can result in reduced performance and rendering issues.
One common error to be mindful of is having an expression that relies on a feature property that might not exist on all features. For example, the following code uses an expression to set the color property of a bubble layer to the `myColor` property of a feature.
var layer = new atlas.layer.BubbleLayer(source, null, {
}); ```
-The above code will function fine if all features in the data source have a `myColor` property, and the value of that property is a color. This may not be an issue if you have complete control of the data in the data source and know for certain all features will have a valid color in a `myColor` property. That said, to make this code safe from errors, a `case` expression can be used with the `has` expression to check that the feature has the `myColor` property. If it does, the `to-color` type expression can then be used to try to convert the value of that property to a color. If the color is invalid, a fallback color can be used. The following code demonstrates how to do this and sets the fallback color to green.
+The above code functions fine if all features in the data source have a `myColor` property, and the value of that property is a color. This may not be an issue if you have complete control of the data in the data source and know for certain all features have a valid color in a `myColor` property. That said, to make this code safe from errors, a `case` expression can be used with the `has` expression to check that the feature has the `myColor` property. If it does, the `to-color` type expression can then be used to try to convert the value of that property to a color. If the color is invalid, a fallback color can be used. The following code demonstrates how to do this and sets the fallback color to green.
```javascript var layer = new atlas.layer.BubbleLayer(source, null, {
var layer = new atlas.layer.BubbleLayer(source, null, {
### Order boolean expressions from most specific to least specific
-When using boolean expressions that contain multiple conditional tests, order the conditional tests from most specific to least specific. By doing this, the first condition should reduce the amount of data the second condition has to be tested against, thus reducing the total number of conditional tests that need to be performed.
+When using boolean expressions that contain multiple conditional tests, order the tests from most specific to least specific. That way, the first condition reduces the amount of data the later conditions have to be evaluated against, reducing the total number of conditional tests performed.
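A minimal sketch, assuming hypothetical `category` and `rating` properties on the features:

```javascript
// Minimal sketch: the most selective test comes first so later tests run against fewer features.
layer.setOptions({
    filter: ['all',
        ['==', ['get', 'category'], 'restaurant'],  // eliminates most features
        ['>=', ['get', 'rating'], 4]                // only evaluated for the remaining features
    ]
});
```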
### Simplify expressions
-Expressions can be powerful and sometimes complex. The simpler an expression is, the faster it will be evaluated. For example, if a simple comparison is needed, an expression like `['==', ['get', 'category'], 'restaurant']` would be better than using a match expression like `['match', ['get', 'category'], 'restaurant', true, false]`. In this case, if the property being checked is a boolean value, a `get` expression would be even simpler `['get','isRestaurant']`.
+Expressions can be powerful and sometimes complex. The simpler an expression is, the faster it's evaluated. For example, if a simple comparison is needed, an expression like `['==', ['get', 'category'], 'restaurant']` would be better than using a match expression like `['match', ['get', 'category'], 'restaurant', true, false]`. In this case, if the property being checked is a boolean value, a `get` expression would be even simpler `['get','isRestaurant']`.
## Web SDK troubleshooting
The following are some tips to debugging some of the common issues encountered w
**Why doesn't the map display when I load the web control?**
-Do the following:
+Things to check:
-* Ensure that you have added your added authentication options to the map. If this is not added, the map will load with a blank canvas since it can't access the base map data without authentication and 401 errors will appear in the network tab of the browser's developer tools.
+* Ensure that you complete your authentication options in the map. Without authentication, the map loads a blank canvas and returns a 401 error in the network tab of the browser's developer tools.
* Ensure that you have an internet connection.
* Check the console of the browser's developer tools for errors. Some errors may cause the map not to render. Debug your application.
-* Ensure you are using a [supported browser](supported-browsers.md).
+* Ensure you're using a [supported browser].
**All my data is showing up on the other side of the world, what's going on?**
-Coordinates, also referred to as positions, in the Azure Maps SDKs aligns with the geospatial industry standard format of `[longitude, latitude]`. This same format is also how coordinates are defined in the GeoJSON schema; the core data formatted used within the Azure Maps SDKs. If your data is appearing on the opposite side of the world, it is most likely due to the longitude and latitude values being reversed in your coordinate/position information.
+Coordinates, also referred to as positions, in the Azure Maps SDKs align with the geospatial industry standard format of `[longitude, latitude]`. This same format is how coordinates are defined in the GeoJSON schema, the core data format used within the Azure Maps SDKs. If your data appears on the opposite side of the world, it's most likely because the longitude and latitude values are reversed in your coordinate/position information.
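A quick sketch of the difference, using Seattle as an arbitrary example location:

```javascript
// GeoJSON and the Azure Maps SDKs expect [longitude, latitude], not [latitude, longitude].
var correctPosition = [-122.33, 47.6];   // Renders near Seattle.
var reversedPosition = [47.6, -122.33];  // Renders on the opposite side of the world.
```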
**Why are HTML markers appearing in the wrong place in the web control?**
Things to check:
**Why are icons or text in the symbol layer appearing in the wrong place?**
-Check that the `anchor` and the `offset` options are correctly configured to align with the part of your image or text that you want to have aligned with the coordinate on the map.
-If the symbol is only out of place when the map is rotated, check the `rotationAlignment` option. By default, symbols we will rotate with the maps viewport so that they appear upright to the user. However, depending on your scenario, it may be desirable to lock the symbol to the map's orientation. Set the `rotationAlignment` option to `'map'` to do this.
-If the symbol is only out of place when the map is pitched/tilted, check the `pitchAlignment` option. By default, symbols we will stay upright with the maps viewport as the map is pitched or tilted. However, depending on your scenario, it may be desirable to lock the symbol to the map's pitch. Set the `pitchAlignment` option to `'map'` to do this.
+Check that the `anchor` and the `offset` options are configured correctly to align with the part of your image or text that you want to have aligned with the coordinate on the map.
+If the symbol is only out of place when the map is rotated, check the `rotationAlignment` option. By default, symbols rotate with the map's viewport, appearing upright to the user. However, depending on your scenario, it may be desirable to lock the symbol to the map's orientation by setting the `rotationAlignment` option to `map`.
+
+If the symbol is only out of place when the map is pitched or tilted, check the `pitchAlignment` option. By default, symbols stay upright in the map's viewport when the map is pitched or tilted. However, depending on your scenario, it may be desirable to lock the symbol to the map's pitch by setting the `pitchAlignment` option to `map`.
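A minimal sketch of both options applied to a symbol layer's icon options; text options accept similar alignment settings, and which you set depends on your scenario:

```javascript
var layer = new atlas.layer.SymbolLayer(source, null, {
    iconOptions: {
        // Lock the icon to the map's orientation so it rotates with the map.
        rotationAlignment: 'map',

        // Lock the icon to the map's pitch so it tilts with the map.
        pitchAlignment: 'map'
    }
});
```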
**Why isn't any of my data appearing on the map?**
Things to check:
* Check the console in the browser's developer tools for errors.
* Ensure that a data source has been created and added to the map, and that the data source has been connected to a rendering layer that has also been added to the map.
-* Add break points in your code and step through it to ensure data is being added to the data source and the data source and layers are being added to the map without any errors occurring.
+* Add break points in your code and step through it. Ensure data is added to the data source and the data source and layers are added to the map.
* Try removing data-driven expressions from your rendering layer. It's possible that one of them may have an error in it that is causing the issue.

**Can I use the Azure Maps Web SDK in a sandboxed iframe?**

Yes.
-> [!TIP]
-> Safari has a [bug](https://bugs.webkit.org/show_bug.cgi?id=170075) that prevents sandboxed iframes from running web workers, a requirement of the Azure Maps Web SDK. The solution is to add the `"allow-same-origin"` tag to the sandbox property of the iframe.
- ## Get support The following are the different ways to get support for Azure Maps depending on your issue. **How do I report a data issue or an issue with an address?**
-Report data issues using the [Azure Maps data feedback tool](https://feedback.azuremaps.com). Detailed instructions on reporting data issues are provided in the [Provide data feedback to Azure Maps](how-to-use-feedback-tool.md) article.
+Report issues using the [Azure Maps feedback] site. Detailed instructions on reporting data issues are provided in the [Provide data feedback to Azure Maps] article.
> [!NOTE]
> Each issue submitted generates a unique URL to track it. Resolution times vary depending on issue type and the time required to verify the change is correct. The changes will appear in the render services weekly update, while other services such as geocoding and routing are updated monthly.

**How do I report a bug in a service or API?**
-Report issues on Azure's [Help + support](https://portal.azure.com/#blade/Microsoft_Azure_Support/HelpAndSupportBlade/overview) page by selecting the **Create a support request** button.
+Report issues on Azure's [Help + support] page by selecting the **Create a support request** button.
**Where do I get technical help for Azure Maps?**
-* For questions related to the Azure Maps Power BI visual, contact [Power BI support](https://powerbi.microsoft.com/support/).
+* For questions related to the Azure Maps Power BI visual, contact [Power BI support].
-* For all other Azure Maps services, contact [Azure support](https://azure.com/support).
+* For all other Azure Maps services, contact [Azure support].
-* For question or comments on specific Azure Maps Features, use the [Azure Maps developer forums](/answers/topics/azure-maps.html).
+* For questions or comments on specific Azure Maps features, use the [Azure Maps developer forums].
## Next steps

See the following articles for more tips on improving the user experience in your application.

> [!div class="nextstepaction"]
-> [Make your application accessible](map-accessibility.md)
+> [Make your application accessible]
Learn more about the terminology used by Azure Maps and the geospatial industry.

> [!div class="nextstepaction"]
-> [Azure Maps glossary](glossary.md)
+> [Azure Maps glossary]
+
+[Authentication and authorization best practices]: authentication-best-practices.md
+[awesome-vector-tiles]: https://github.com/mapbox/awesome-vector-tiles#awesome-vector-tiles-
+[Azure Maps Creator platform]: creator-indoor-maps.md
+[Azure Maps developer forums]: /answers/topics/azure-maps.html
+[Azure Maps feedback]: https://feedback.azuremaps.com
+[Azure Maps glossary]: glossary.md
+[Azure support]: https://azure.com/support
+[azure-maps-control]: https://www.npmjs.com/package/azure-maps-control?activeTab=versions
+[bug]: https://bugs.webkit.org/show_bug.cgi?id=170075
+[Clustering and the heat maps layer]: clustering-point-data-web-sdk.md#clustering-and-the-heat-maps-layer
+[Clustering point data in the Web SDK]: clustering-point-data-web-sdk.md
+[Create a data source]: create-data-source-web-sdk.md
+[Data-driven style expressions]: data-driven-style-expressions-web-sdk.md
+[data-driven styles]: data-driven-style-expressions-web-sdk.md
+[Help + support]: https://portal.azure.com/#blade/Microsoft_Azure_Support/HelpAndSupportBlade/overview
+[Make your application accessible]: map-accessibility.md
+[Power BI support]: https://powerbi.microsoft.com/support
+[Provide data feedback to Azure Maps]: how-to-use-feedback-tool.md
+[supported browser]: supported-browsers.md
+[Tippecanoe]: https://github.com/mapbox/tippecanoe
+[useful tools for working with GeoJSON data]: https://github.com/tmcw/awesome-geojson
azure-monitor Convert Classic Resource https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/convert-classic-resource.md
Title: Migrate an Application Insights classic resource to a workspace-based resource - Azure Monitor | Microsoft Docs description: Learn how to upgrade your Application Insights classic resource to the new workspace-based model. Previously updated : 03/22/2023 Last updated : 05/14/2023
Legacy table: availabilityResults
|name|string|Name|string| |operation_Id|string|OperationId|string| |operation_Name|string|OperationName|string|
-|operation_ParentId|string|OperationParentId|string|
+|operation_ParentId|string|ParentId|string|
|operation_SyntheticSource|string|OperationSyntheticSource|string| |performanceBucket|string|PerformanceBucket|string| |sdkVersion|string|SDKVersion|string|
Legacy table: browserTimings
|networkDuration|real|NetworkDurationMs|real| |operation_Id|string|OperationId|string| |operation_Name|string|OperationName|string|
-|operation_ParentId|string|OperationParentId|string|
+|operation_ParentId|string|ParentId|string|
|operation_SyntheticSource|string|OperationSyntheticSource|string| |performanceBucket|string|PerformanceBucket|string| |processingDuration|real|ProcessingDurationMs|real|
Legacy table: dependencies
|name|string|Name|string| |operation_Id|string|OperationId|string| |operation_Name|string|OperationName|string|
-|operation_ParentId|string|OperationParentId|string|
+|operation_ParentId|string|ParentId|string|
|operation_SyntheticSource|string|OperationSyntheticSource|string| |performanceBucket|string|PerformanceBucket|string| |resultCode|string|ResultCode|string|
Legacy table: customEvents
|name|string|Name|string| |operation_Id|string|OperationId|string| |operation_Name|string|OperationName|string|
-|operation_ParentId|string|OperationParentId|string|
+|operation_ParentId|string|ParentId|string|
|operation_SyntheticSource|string|OperationSyntheticSource|string| |sdkVersion|string|SDKVersion|string| |session_Id|string|SessionId|string|
Legacy table: customMetrics
|name|string|Name|string| |operation_Id|string|OperationId|string| |operation_Name|string|OperationName|string|
-|operation_ParentId|string|OperationParentId|string|
+|operation_ParentId|string|ParentId|string|
|operation_SyntheticSource|string|OperationSyntheticSource|string| |sdkVersion|string|SDKVersion|string| |session_Id|string|SessionId|string|
Legacy table: pageViews
|name|string|Name|string| |operation_Id|string|OperationId|string| |operation_Name|string|OperationName|string|
-|operation_ParentId|string|OperationParentId|string|
+|operation_ParentId|string|ParentId|string|
|operation_SyntheticSource|string|OperationSyntheticSource|string| |performanceBucket|string|PerformanceBucket|string| |sdkVersion|string|SDKVersion|string|
Legacy table: performanceCounters
|name|string|Name|string| |operation_Id|string|OperationId|string| |operation_Name|string|OperationName|string|
-|operation_ParentId|string|OperationParentId|string|
+|operation_ParentId|string|ParentId|string|
|operation_SyntheticSource|string|OperationSyntheticSource|string| |sdkVersion|string|SDKVersion|string| |session_Id|string|SessionId|string|
Legacy table: requests
|name|string|Name|String| |operation_Id|string|OperationId|string| |operation_Name|string|OperationName|string|
-|operation_ParentId|string|OperationParentId|string|
+|operation_ParentId|string|ParentId|string|
|operation_SyntheticSource|string|OperationSyntheticSource|string| |performanceBucket|string|PerformanceBucket|String| |resultCode|string|ResultCode|String|
Legacy table: exceptions
|method|string|Method|string| |operation_Id|string|OperationId|string| |operation_Name|string|OperationName|string|
-|operation_ParentId|string|OperationParentId|string|
+|operation_ParentId|string|ParentId|string|
|operation_SyntheticSource|string|OperationSyntheticSource|string| |outerAssembly|string|OuterAssembly|string| |outerMessage|string|OuterMessage|string|
azure-monitor Separate Resources https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/separate-resources.md
Be aware that:
To make it easier to change the instrumentation key as the code moves between stages of production, reference the key dynamically in code instead of using a hardcoded or static value.
-Set the key in an initialization method, such as `global.aspx.cs`, in an ASP.NET service:
+Set the key in an initialization method, such as `global.asax.cs`, in an ASP.NET service:
```csharp protected void Application_Start()
azure-resource-manager Key Vault Access https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/managed-applications/key-vault-access.md
Title: Use Azure Key Vault when deploying Managed Applications
description: Shows how to access secrets in Azure Key Vault when deploying Managed Applications. Previously updated : 10/04/2022 Last updated : 04/14/2023 # Access Key Vault secret when deploying Azure Managed Applications
This article describes how to configure the Key Vault to work with Managed Appli
:::image type="content" source="./media/key-vault-access/open-key-vault.png" alt-text="Screenshot of the Azure home page to open a key vault using search or by selecting key vault.":::
-1. Select **Access policies**.
+1. Select **Access configuration**.
- :::image type="content" source="./media/key-vault-access/select-access-policies.png" alt-text="Screenshot of the key vault setting to select access policies.":::
+ :::image type="content" source="./media/key-vault-access/select-access-configuration.png" alt-text="Screenshot of the key vault setting to select access configuration.":::
-1. Select **Azure Resource Manager for template deployment**. Then, select **Save**.
+1. Select **Azure Resource Manager for template deployment**. Then, select **Apply**.
- :::image type="content" source="./media/key-vault-access/enable-template.png" alt-text="Screenshot of the key vault's access policies that enable Azure Resource Manager for template deployment.":::
+ :::image type="content" source="./media/key-vault-access/enable-template.png" alt-text="Screenshot of the key vault's access configuration that enables Azure Resource Manager for template deployment.":::
## Add service as contributor
-Assign the **Contributor** role to the **Appliance Resource Provider** user at the key vault scope. For detailed steps, see [Assign Azure roles using the Azure portal](../../role-based-access-control/role-assignments-portal.md).
+Assign the **Contributor** role to the **Appliance Resource Provider** user at the key vault scope. For detailed steps, go to [Assign Azure roles using the Azure portal](../../role-based-access-control/role-assignments-portal.md).
-The **Appliance Resource Provider** is a service principal in your Azure Active Directory's tenant. From the Azure portal, you can see if it's registered by going to **Azure Active Directory** > **Enterprise applications** and change the search filter to **Microsoft Applications**. Search for _Appliance Resource Provider_. If it's not found, [register](../troubleshooting/error-register-resource-provider.md) the `Microsoft.Solutions` resource provider.
+The **Appliance Resource Provider** is a service principal in your Azure Active Directory tenant. From the Azure portal, you can verify whether it's registered by going to **Azure Active Directory** > **Enterprise applications** and changing the search filter to **Microsoft Applications**. Search for _Appliance Resource Provider_. If it's not found, [register](../troubleshooting/error-register-resource-provider.md) the `Microsoft.Solutions` resource provider.
## Reference Key Vault secret
To pass a secret from a Key Vault to a template in your Managed Application, you
"resources": [ { "type": "Microsoft.Resources/deployments",
- "apiVersion": "2021-04-01",
+ "apiVersion": "2022-09-01",
"name": "dynamicSecret", "properties": { "mode": "Incremental",
To pass a secret from a Key Vault to a template in your Managed Application, you
"resources": [ { "type": "Microsoft.Sql/servers",
- "apiVersion": "2022-02-01-preview",
+ "apiVersion": "2022-05-01-preview",
"name": "[variables('sqlServerName')]", "location": "[parameters('location')]", "properties": {
To pass a secret from a Key Vault to a template in your Managed Application, you
You've configured your Key Vault to be accessible during deployment of a Managed Application. -- For information about passing a value from a Key Vault as a template parameter, see [Use Azure Key Vault to pass secure parameter value during deployment](../templates/key-vault-parameter.md).-- To learn more about key vault security, see [Azure Key Vault security](../../key-vault/general/security-features.md) and [Authentication in Azure Key Vault](../../key-vault/general/authentication.md).-- For managed application examples, see [Sample projects for Azure managed applications](sample-projects.md).-- To learn how to create a UI definition file for a managed application, see [Get started with CreateUiDefinition](create-uidefinition-overview.md).
+- For information about passing a value from a Key Vault as a template parameter, go to [Use Azure Key Vault to pass secure parameter value during deployment](../templates/key-vault-parameter.md).
+- To learn more about key vault security, go to [Azure Key Vault security](../../key-vault/general/security-features.md) and [Authentication in Azure Key Vault](../../key-vault/general/authentication.md).
+- For managed application examples, go to [Sample projects for Azure managed applications](sample-projects.md).
+- To learn how to create a UI definition file for a managed application, go to [Get started with CreateUiDefinition](create-uidefinition-overview.md).
azure-video-indexer Logic Apps Connector Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-video-indexer/logic-apps-connector-tutorial.md
Title: The Azure Video Indexer connectors with Logic App and Power Automate. description: This tutorial shows how to unlock new experiences and monetization opportunities Azure Video Indexer connectors with Logic App and Power Automate. -+ Last updated 09/21/2020
azure-video-indexer Monitor Video Indexer Data Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-video-indexer/monitor-video-indexer-data-reference.md
Title: Monitoring Azure Video Indexer data reference #Required; *your official service name*
+ Title: Monitoring Azure Video Indexer data reference
description: Important reference material needed when you monitor Azure Video Indexer -+ --++ Previously updated : 05/10/2022 #Required; mm/dd/yyyy format. Last updated : 05/10/2022 <!-- VERSION 2.3 Template for monitoring data reference article for Azure services. This article is support for the main "Monitoring [servicename]" article for the service. -->
azure-video-indexer Monitor Video Indexer https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-video-indexer/monitor-video-indexer.md
Title: Monitoring Azure Video Indexer #Required; Must be "Monitoring *Azure Video Indexer*
-description: Start here to learn how to monitor Azure Video Indexer #Required;
+ Title: Monitoring Azure Video Indexer
+description: Start here to learn how to monitor Azure Video Indexer
---+++ Previously updated : 12/19/2022 #Required; mm/dd/yyyy format. Last updated : 12/19/2022 <!-- VERSION 2.2
azure-vmware Azure Vmware Solution Platform Updates https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-vmware/azure-vmware-solution-platform-updates.md
Last updated 3/16/2023
Microsoft will regularly apply important updates to the Azure VMware Solution for new features and software lifecycle management. You'll receive a notification through Azure Service Health that includes the timeline of the maintenance. For more information, see [Host maintenance and lifecycle management](concepts-private-clouds-clusters.md#host-maintenance-and-lifecycle-management).
+## April 2023
+
+Introducing run commands for HCX on Azure VMware Solution. You can use these run commands to restart HCX cloud manager in your Azure VMware Solution private cloud. You can also scale HCX cloud manager using run commands. To learn how to use run commands for HCX, see [Use HCX Run commands](use-hcx-run-commands.md).
## February 2023
azure-vmware Concepts Run Command https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-vmware/concepts-run-command.md
Azure VMware Solution supports the following operations:
- [Deploy disaster recovery using JetStream](deploy-disaster-recovery-using-jetstream.md)
+- [Use HCX Run commands](use-hcx-run-commands.md)
+ >[!NOTE] >Run commands are executed one at a time in the order submitted.
bastion Connect Ip Address https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/bastion/connect-ip-address.md
-# Connect to a VM via specified private IP address through the portal
+# Connect to a VM via specified private IP address
IP-based connection lets you connect to your on-premises, non-Azure, and Azure virtual machines via Azure Bastion over ExpressRoute or a VPN site-to-site connection using a specified private IP address. The steps in this article show you how to configure your Bastion deployment, and then connect to an on-premises resource using IP-based connection. For more information about Azure Bastion, see the [Overview](bastion-overview.md).
Before you begin these steps, verify that you have the following environment set
1. Select **Apply** to apply the changes. It takes a few minutes for the Bastion configuration to complete.
-## Connect to VM
+## Connect to VM - Azure portal
1. To connect to a VM using a specified private IP address, you make the connection from Bastion to the VM, not directly from the VM page. On your Bastion page, select **Connect** to open the Connect page.
Before you begin these steps, verify that you have the following environment set
1. Select **Connect** to connect to your virtual machine.
+## Connect to VM - native client
+
+You can connect to VMs using a specified IP address with the native client via SSH, RDP, or tunneling. Note that this feature doesn't currently support Azure Active Directory authentication or custom ports and protocols. To learn more about configuring native client support, see [Connect to a VM - native client](connect-native-client-windows.md). Use the following commands as examples:
+
+ **RDP:**
+
+ ```azurecli
+ az network bastion rdp --name "<BastionName>" --resource-group "<ResourceGroupName>" --target-ip-address "<VMIPAddress>"
+ ```
+
+ **SSH:**
+
+ ```azurecli
+ az network bastion ssh --name "<BastionName>" --resource-group "<ResourceGroupName>" --target-ip-address "<VMIPAddress>" --auth-type "ssh-key" --username "<Username>" --ssh-key "<Filepath>"
+ ```
+
+ **Tunnel:**
+
+ ```azurecli
+ az network bastion tunnel --name "<BastionName>" --resource-group "<ResourceGroupName>" --target-ip-address "<VMIPAddress>" --resource-port "<TargetVMPort>" --port "<LocalMachinePort>"
+ ```
++ ## Next steps
-Read the [Bastion FAQ](bastion-faq.md) for additional information.
+Read the [Bastion FAQ](bastion-faq.md) for additional information.
bastion Connect Native Client Windows https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/bastion/connect-native-client-windows.md
This connection supports file upload from the local computer to the target VM. F
ssh <username>@127.0.0.1 -p <LocalMachinePort> ```
+## <a name="connect-IP"></a>Connect to VM - IP Address
+
+This section helps you connect to your on-premises, non-Azure, and Azure virtual machines via Azure Bastion using a specified private IP address from the native client. To connect to your VM by IP address, replace `--target-resource-id` with `--target-ip-address` and the specified IP address in any of the above commands.
+
+> [!Note]
+> This feature doesn't currently support Azure AD authentication or custom ports and protocols. For more information on IP-based connection, see [Connect to a VM - IP address](connect-ip-address.md).
+
+Use the following commands as examples:
++
+ **RDP:**
+
+ ```azurecli
+ az network bastion rdp --name "<BastionName>" --resource-group "<ResourceGroupName>" --target-ip-address "<VMIPAddress>"
+ ```
+
+ **SSH:**
+
+ ```azurecli
+ az network bastion ssh --name "<BastionName>" --resource-group "<ResourceGroupName>" --target-ip-address "<VMIPAddress>" --auth-type "ssh-key" --username "<Username>" --ssh-key "<Filepath>"
+ ```
+
+ **Tunnel:**
+
+ ```azurecli
+ az network bastion tunnel --name "<BastionName>" --resource-group "<ResourceGroupName>" --target-ip-address "<VMIPAddress>" --resource-port "<TargetVMPort>" --port "<LocalMachinePort>"
+ ```
++ ## Next steps [Upload or download files](vm-upload-download-native.md)
batch Batch Automatic Scaling https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/batch/batch-automatic-scaling.md
Title: Automatically scale compute nodes in an Azure Batch pool
-description: Enable automatic scaling on a cloud pool to dynamically adjust the number of compute nodes in the pool.
+ Title: Autoscale compute nodes in an Azure Batch pool
+description: Enable automatic scaling on an Azure Batch cloud pool to dynamically adjust the number of compute nodes in the pool.
Previously updated : 04/06/2023 Last updated : 04/12/2023
-# Create an automatic formula for scaling compute nodes in a Batch pool
+
+# Create a formula to automatically scale compute nodes in a Batch pool
Azure Batch can automatically scale pools based on parameters that you define, saving you time and money. With automatic scaling, Batch dynamically adds nodes to a pool as task demands increase, and removes compute nodes as task demands decrease.
-To enable automatic scaling on a pool of compute nodes, you associate the pool with an *autoscale formula* that you define. The Batch service uses the autoscale formula to determine how many nodes are needed to execute your workload. These nodes may be dedicated nodes or [Azure Spot nodes](batch-spot-vms.md). Batch periodically reviews service metrics data and uses it to adjust the number of nodes in the pool based on your formula and at an interval that you define.
+To enable automatic scaling on a pool of compute nodes, you associate the pool with an *autoscale formula* that you define. The Batch service uses the autoscale formula to determine how many nodes are needed to execute your workload. These nodes can be dedicated nodes or [Azure Spot nodes](batch-spot-vms.md). Batch periodically reviews service metrics data and uses it to adjust the number of nodes in the pool based on your formula and at an interval that you define.
-You can enable automatic scaling when you create a pool, or apply it to an existing pool. Batch enables you to evaluate your formulas before assigning them to pools and to monitor the status of automatic scaling runs. Once you configure a pool with automatic scaling, you can make changes to the formula later.
+You can enable automatic scaling when you create a pool, or apply it to an existing pool. Batch lets you evaluate your formulas before assigning them to pools and to monitor the status of automatic scaling runs. Once you configure a pool with automatic scaling, you can make changes to the formula later.
> [!IMPORTANT]
-> When you create a Batch account, you can specify the [pool allocation mode](accounts.md), which determines whether pools are allocated in a Batch service subscription (the default) or in your user subscription. If you created your Batch account with the default Batch service configuration, then your account is limited to a maximum number of cores that can be used for processing. The Batch service scales compute nodes only up to that core limit. For this reason, the Batch service may not reach the target number of compute nodes specified by an autoscale formula. See [Quotas and limits for the Azure Batch service](batch-quota-limit.md) for information on viewing and increasing your account quotas.
+> When you create a Batch account, you can specify the [pool allocation mode](accounts.md), which determines whether pools are allocated in a Batch service subscription (the default) or in your user subscription. If you created your Batch account with the default Batch service configuration, then your account is limited to a maximum number of cores that can be used for processing. The Batch service scales compute nodes only up to that core limit. For this reason, the Batch service might not reach the target number of compute nodes specified by an autoscale formula. To learn how to view and increase your account quotas, see [Quotas and limits for the Azure Batch service](batch-quota-limit.md).
> >If you created your account with user subscription mode, then your account shares in the core quota for the subscription. For more information, see [Virtual Machines limits](../azure-resource-manager/management/azure-subscription-service-limits.md#virtual-machines-limits) in [Azure subscription and service limits, quotas, and constraints](../azure-resource-manager/management/azure-subscription-service-limits.md).
You can enable automatic scaling when you create a pool, or apply it to an exist
An autoscale formula is a string value that you define that contains one or more statements. The autoscale formula is assigned to a pool's [autoScaleFormula](/rest/api/batchservice/enable-automatic-scaling-on-a-pool) element (Batch REST) or [CloudPool.AutoScaleFormula](/dotnet/api/microsoft.azure.batch.cloudpool.autoscaleformula) property (Batch .NET). The Batch service uses your formula to determine the target number of compute nodes in the pool for the next interval of processing. The formula string can't exceed 8 KB, can include up to 100 statements that are separated by semicolons, and can include line breaks and comments.
-You can think of automatic scaling formulas as a Batch autoscale "language." Formula statements are free-formed expressions that can include both service-defined variables (defined by the Batch service) and user-defined variables. Formulas can perform various operations on these values by using built-in types, operators, and functions. For example, a statement might take the following form:
+You can think of automatic scaling formulas as a Batch autoscale "language." Formula statements are free-formed expressions that can include both *service-defined variables*, which are defined by the Batch service, and *user-defined variables*. Formulas can perform various operations on these values by using built-in types, operators, and functions. For example, a statement might take the following form:
``` $myNewVariable = function($ServiceDefinedVariable, $myCustomVariable); ```
-Formulas generally contain multiple statements that perform operations on values that are obtained in previous statements. For example, first we obtain a value for `variable1`, then pass it to a function to populate `variable2`:
+Formulas generally contain multiple statements that perform operations on values that are obtained in previous statements. For example, first you obtain a value for `variable1`, then pass it to a function to populate `variable2`:
``` $variable1 = function1($ServiceDefinedVariable);
$variable2 = function2($OtherServiceDefinedVariable, $variable1);
Include these statements in your autoscale formula to arrive at a target number of compute nodes. Dedicated nodes and Spot nodes each have their own target settings. An autoscale formula can include a target value for dedicated nodes, a target value for Spot nodes, or both.
-The target number of nodes may be higher, lower, or the same as the current number of nodes of that type in the pool. Batch evaluates a pool's autoscale formula at a specific [automatic scaling intervals](#automatic-scaling-interval). Batch adjusts the target number of each type of node in the pool to the number that your autoscale formula specifies at the time of evaluation.
+The target number of nodes might be higher, lower, or the same as the current number of nodes of that type in the pool. Batch evaluates a pool's autoscale formula at specific [automatic scaling intervals](#automatic-scaling-interval). Batch adjusts the target number of each type of node in the pool to the number that your autoscale formula specifies at the time of evaluation.
### Sample autoscale formulas
-Below are examples of two autoscale formulas, which can be adjusted to work for most scenarios. The variables `startingNumberOfVMs` and `maxNumberofVMs` in the example formulas can be adjusted to your needs.
+The following examples show two autoscale formulas, which can be adjusted to work for most scenarios. The variables `startingNumberOfVMs` and `maxNumberofVMs` in the example formulas can be adjusted to your needs.
#### Pending tasks
$NodeDeallocationOption = taskcompletion;
#### Preempted nodes
-This example creates a pool that starts with 25 Spot nodes. Every time a Spot node is preempted, it's replaced with a dedicated node. As with the first example, the `maxNumberofVMs` variable prevents the pool from exceeding 25 VMs. This example is useful for taking advantage of Spot VMs while also ensuring that only a fixed number of pre-emptions occur for the lifetime of the pool.
+This example creates a pool that starts with 25 Spot nodes. Every time a Spot node is preempted, it's replaced with a dedicated node. As with the first example, the `maxNumberofVMs` variable prevents the pool from exceeding 25 VMs. This example is useful for taking advantage of Spot VMs while also ensuring that only a fixed number of preemptions occur for the lifetime of the pool.
``` maxNumberofVMs = 25;
$TargetLowPriorityNodes = min(maxNumberofVMs , maxNumberofVMs - $TargetDedicated
$NodeDeallocationOption = taskcompletion; ```
-You'll learn more about [how to create autoscale formulas](#write-an-autoscale-formula) and see more [example autoscale formulas](#example-autoscale-formulas) later in this topic.
+You'll learn more about [how to create autoscale formulas](#write-an-autoscale-formula) and see more [example autoscale formulas](#example-autoscale-formulas) later in this article.
## Variables
-You can use both **service-defined** and **user-defined** variables in your autoscale formulas.
+You can use both *service-defined* and *user-defined* variables in your autoscale formulas.
The service-defined variables are built in to the Batch service. Some service-defined variables are read-write, and some are read-only.
-User-defined variables are variables that you define. In the example formula shown above, `$TargetDedicatedNodes` and `$PendingTasks` are service-defined variables, while `startingNumberOfVMs` and `maxNumberofVMs` are user-defined variables.
+User-defined variables are variables that you define. In the previous example, `$TargetDedicatedNodes` and `$PendingTasks` are service-defined variables, while `startingNumberOfVMs` and `maxNumberofVMs` are user-defined variables.
> [!NOTE] > Service-defined variables are always preceded by a dollar sign ($). For user-defined variables, the dollar sign is optional.
You can get and set the values of these service-defined variables to manage the
| Variable | Description | | | |
-| $TargetDedicatedNodes |The target number of dedicated compute nodes for the pool. This is specified as a target because a pool may not always achieve the desired number of nodes. For example, if the target number of dedicated nodes is modified by an autoscale evaluation before the pool has reached the initial target, the pool may not reach the target. <br /><br /> A pool in an account created in Batch service mode may not achieve its target if the target exceeds a Batch account node or core quota. A pool in an account created in user subscription mode may not achieve its target if the target exceeds the shared core quota for the subscription.|
-| $TargetLowPriorityNodes |The target number of Spot compute nodes for the pool. This specified as a target because a pool may not always achieve the desired number of nodes. For example, if the target number of Spot nodes is modified by an autoscale evaluation before the pool has reached the initial target, the pool may not reach the target. A pool may also not achieve its target if the target exceeds a Batch account node or core quota. <br /><br /> For more information on Spot compute nodes, see [Use Spot VMs with Batch](batch-spot-vms.md). |
-| $NodeDeallocationOption |The action that occurs when compute nodes are removed from a pool. Possible values are:<ul><li>**requeue**: The default value. Ends tasks immediately and puts them back on the job queue so that they're rescheduled. This action ensures the target number of nodes is reached as quickly as possible. However, it may be less efficient, as any running tasks are interrupted and restarted. <li>**terminate**: Ends tasks immediately and removes them from the job queue.<li>**taskcompletion**: Waits for currently running tasks to finish and then removes the node from the pool. Use this option to avoid tasks being interrupted and requeued, wasting any work the task has done.<li>**retaineddata**: Waits for all the local task-retained data on the node to be cleaned up before removing the node from the pool.</ul> |
+| $TargetDedicatedNodes |The target number of dedicated compute nodes for the pool. Specified as a target because a pool might not always achieve the desired number of nodes. For example, if the target number of dedicated nodes is modified by an autoscale evaluation before the pool has reached the initial target, the pool might not reach the target. <br><br> A pool in an account created in Batch service mode might not achieve its target if the target exceeds a Batch account node or core quota. A pool in an account created in user subscription mode might not achieve its target if the target exceeds the shared core quota for the subscription.|
+| $TargetLowPriorityNodes |The target number of Spot compute nodes for the pool. Specified as a target because a pool might not always achieve the desired number of nodes. For example, if the target number of Spot nodes is modified by an autoscale evaluation before the pool has reached the initial target, the pool might not reach the target. A pool might also not achieve its target if the target exceeds a Batch account node or core quota. <br><br> For more information on Spot compute nodes, see [Use Spot VMs with Batch](batch-spot-vms.md). |
+| $NodeDeallocationOption |The action that occurs when compute nodes are removed from a pool. Possible values are:<br>- **requeue**: The default value. Ends tasks immediately and puts them back on the job queue so that they're rescheduled. This action ensures the target number of nodes is reached as quickly as possible. However, it might be less efficient, because any running tasks are interrupted and then must be restarted. <br>- **terminate**: Ends tasks immediately and removes them from the job queue.<br>- **taskcompletion**: Waits for currently running tasks to finish and then removes the node from the pool. Use this option to avoid tasks being interrupted and requeued, wasting any work the task has done.<br>- **retaineddata**: Waits for all the local task-retained data on the node to be cleaned up before removing the node from the pool. |
> [!NOTE]
-> The `$TargetDedicatedNodes` variable can also be specified using the alias `$TargetDedicated`. Similarly, the `$TargetLowPriorityNodes` variable can be specified using the alias `$TargetLowPriority`. If both the fully named variable and its alias are set by the formula, the value assigned to the fully named variable will take precedence.
+> The `$TargetDedicatedNodes` variable can also be specified using the alias `$TargetDedicated`. Similarly, the `$TargetLowPriorityNodes` variable can be specified using the alias `$TargetLowPriority`. If both the fully named variable and its alias are set by the formula, the value assigned to the fully named variable takes precedence.
### Read-only service-defined variables You can get the value of these service-defined variables to make adjustments that are based on metrics from the Batch service. > [!IMPORTANT]
-> Job release tasks aren't currently included in variables that provide task counts, such as $ActiveTasks and $PendingTasks. Depending on your autoscale formula, this can result in nodes being removed with no nodes available to run job release tasks.
+> Job release tasks aren't currently included in variables that provide task counts, such as `$ActiveTasks` and `$PendingTasks`. Depending on your autoscale formula, this can result in nodes being removed with no nodes available to run job release tasks.
> [!TIP] > These read-only service-defined variables are *objects* that provide various methods to access data associated with each. For more information, see [Obtain sample data](#obtain-sample-data) later in this article.
You can get the value of these service-defined variables to make adjustments tha
| $NetworkInBytes |The number of inbound bytes. Retiring after 2024-Mar-31. | | $NetworkOutBytes |The number of outbound bytes. Retiring after 2024-Mar-31. | | $SampleNodeCount |The count of compute nodes. Retiring after 2024-Mar-31. |
-| $ActiveTasks |The number of tasks that are ready to execute but aren't yet executing. This includes all tasks that are in the active state and whose dependencies have been satisfied. Any tasks that are in the active state but whose dependencies haven't been satisfied are excluded from the $ActiveTasks count. For a multi-instance task, $ActiveTasks includes the number of instances set on the task.|
+| $ActiveTasks |The number of tasks that are ready to execute but aren't yet executing. This includes all tasks that are in the active state and whose dependencies have been satisfied. Any tasks that are in the active state but whose dependencies haven't been satisfied are excluded from the `$ActiveTasks` count. For a multi-instance task, `$ActiveTasks` includes the number of instances set on the task.|
| $RunningTasks |The number of tasks in a running state. |
-| $PendingTasks |The sum of $ActiveTasks and $RunningTasks. |
+| $PendingTasks |The sum of `$ActiveTasks` and `$RunningTasks`. |
| $SucceededTasks |The number of tasks that finished successfully. | | $FailedTasks |The number of tasks that failed. | | $TaskSlotsPerNode |The number of task slots that can be used to run concurrent tasks on a single compute node in the pool. |
You can get the value of these service-defined variables to make adjustments tha
> before this date. > [!WARNING]
-> `$PreemptedNodeCount` is currently not available and will return `0` valued data.
+> `$PreemptedNodeCount` is currently not available and returns `0` valued data.
> [!NOTE] > Use `$RunningTasks` when scaling based on the number of tasks running at a point in time, and `$ActiveTasks` when scaling based on the number of tasks that are queued up to run.
Testing a double with a ternary operator (`double ? statement1 : statement2`), r
## Functions
-You can use these predefined **functions** when defining an autoscale formula.
+You can use these predefined *functions* when defining an autoscale formula.
| Function | Return type | Description | | | | |
The *doubleVecList* value is converted to a single *doubleVec* before evaluation
## Metrics
-You can use both resource and task metrics when you're defining a formula. You adjust the target number of dedicated nodes in the pool based on the metrics data that you obtain and evaluate. For more information on each metric, see the [Variables](#variables) section above.
-
-<table>
- <tr>
- <th>Metric</th>
- <th>Description</th>
- </tr>
- <tr>
- <td><b>Resource</b></td>
- <td><p>Resource metrics are based on the CPU, the bandwidth, the memory usage of compute nodes, and the number of nodes.</p>
- <p> These service-defined variables are useful for making adjustments based on node count:</p>
- <p><ul>
- <li>$TargetDedicatedNodes</li>
- <li>$TargetLowPriorityNodes</li>
- <li>$CurrentDedicatedNodes</li>
- <li>$CurrentLowPriorityNodes</li>
- <li>$PreemptedNodeCount</li>
- <li>$SampleNodeCount</li>
- </ul></p>
- <p>These service-defined variables are useful for making adjustments based on node resource usage:</p>
- <p><ul>
- <li>$CPUPercent</li>
- <li>$WallClockSeconds</li>
- <li>$MemoryBytes</li>
- <li>$DiskBytes</li>
- <li>$DiskReadBytes</li>
- <li>$DiskWriteBytes</li>
- <li>$DiskReadOps</li>
- <li>$DiskWriteOps</li>
- <li>$NetworkInBytes</li>
- <li>$NetworkOutBytes</li></ul></p>
- </tr>
- <tr>
- <td><b>Task</b></td>
- <td><p>Task metrics are based on the status of tasks, such as Active, Pending, and Completed. The following service-defined variables are useful for making pool-size adjustments based on task metrics:</p>
- <p><ul>
- <li>$ActiveTasks</li>
- <li>$RunningTasks</li>
- <li>$PendingTasks</li>
- <li>$SucceededTasks</li>
- <li>$FailedTasks</li></ul></p>
- </td>
- </tr>
-</table>
+You can use both resource and task metrics when you define a formula. You adjust the target number of dedicated nodes in the pool based on the metrics data that you obtain and evaluate. For more information on each metric, see the [Variables](#variables) section.
+
+| Metric | Description |
+|-|--|
+| Resource | Resource metrics are based on the CPU, the bandwidth, the memory usage of compute nodes, and the number of nodes.<br><br>These service-defined variables are useful for making adjustments based on node count:<br>- $TargetDedicatedNodes <br>- $TargetLowPriorityNodes <br>- $CurrentDedicatedNodes <br>- $CurrentLowPriorityNodes <br>- $PreemptedNodeCount <br>- $SampleNodeCount <br><br>These service-defined variables are useful for making adjustments based on node resource usage: <br>- $CPUPercent <br>- $WallClockSeconds <br>- $MemoryBytes <br>- $DiskBytes <br>- $DiskReadBytes <br>- $DiskWriteBytes <br>- $DiskReadOps <br>- $DiskWriteOps <br>- $NetworkInBytes <br>- $NetworkOutBytes |
+| Task | Task metrics are based on the status of tasks, such as Active, Pending, and Completed. The following service-defined variables are useful for making pool-size adjustments based on task metrics: <br>- $ActiveTasks <br>- $RunningTasks <br>- $PendingTasks <br>- $SucceededTasks <br>- $FailedTasks |
## Obtain sample data
-The core operation of an autoscale formula is to obtain task and resource metric data (samples), and then adjust pool size based on that data. As such, it's important to have a clear understanding of how autoscale formulas interact with samples.
+The core operation of an autoscale formula is to obtain task and resource metrics data (samples), and then adjust pool size based on that data. As such, it's important to have a clear understanding of how autoscale formulas interact with samples.
### Methods
-Autoscale formulas act on samples of metric data provided by the Batch service. A formula grows or shrinks the pool compute nodes based on the values that it obtains. Service-defined variables are objects that provide methods to access data that is associated with that object. For example, the following expression shows a request to get the last five minutes of CPU usage:
+Autoscale formulas act on samples of metric data provided by the Batch service. A formula grows or shrinks the pool compute nodes based on the values that it obtains. Service-defined variables are objects that provide methods to access data that's associated with that object. For example, the following expression shows a request to get the last five minutes of CPU usage:
``` $CPUPercent.GetSample(TimeInterval_Minute * 5) ```
-The following methods may be used to obtain sample data about service-defined variables.
+The following methods can be used to obtain sample data about service-defined variables.
| Method | Description | | | |
-| GetSample() |The `GetSample()` method returns a vector of data samples.<br/><br/>A sample is 30 seconds worth of metrics data. In other words, samples are obtained every 30 seconds. But as noted below, there's a delay between when a sample is collected and when it's available to a formula. As such, not all samples for a given time period may be available for evaluation by a formula.<ul><li>`doubleVec GetSample(double count)`: Specifies the number of samples to obtain from the most recent samples that were collected. `GetSample(1)` returns the last available sample. For metrics like `$CPUPercent`, however, `GetSample(1)` shouldn't be used, because it's impossible to know *when* the sample was collected. It could be recent, or, because of system issues, it might be much older. In such cases, it's better to use a time interval as shown below.<li>`doubleVec GetSample((timestamp or timeinterval) startTime [, double samplePercent])`: Specifies a time frame for gathering sample data. Optionally, it also specifies the percentage of samples that must be available in the requested time frame. For example, `$CPUPercent.GetSample(TimeInterval_Minute * 10)` would return 20 samples if all samples for the last 10 minutes are present in the `CPUPercent` history. If the last minute of history wasn't available, only 18 samples would be returned. In this case `$CPUPercent.GetSample(TimeInterval_Minute * 10, 95)` would fail because only 90 percent of the samples are available, but `$CPUPercent.GetSample(TimeInterval_Minute * 10, 80)` would succeed.<li>`doubleVec GetSample((timestamp or timeinterval) startTime, (timestamp or timeinterval) endTime [, double samplePercent])`: Specifies a time frame for gathering data, with both a start time and an end time. As mentioned above, there's a delay between when a sample is collected and when it becomes available to a formula. Consider this delay when you use the `GetSample` method. See `GetSamplePercent` below. |
+| GetSample() |The `GetSample()` method returns a vector of data samples.<br><br>A sample is 30 seconds worth of metrics data. In other words, samples are obtained every 30 seconds. But as noted below, there's a delay between when a sample is collected and when it's available to a formula. As such, not all samples for a given time period might be available for evaluation by a formula. <br><br>- `doubleVec GetSample(double count)`: Specifies the number of samples to obtain from the most recent samples that were collected. `GetSample(1)` returns the last available sample. For metrics like `$CPUPercent`, however, `GetSample(1)` shouldn't be used, because it's impossible to know *when* the sample was collected. It could be recent, or, because of system issues, it might be much older. In such cases, it's better to use a time interval as shown below.<br><br>- `doubleVec GetSample((timestamp or timeinterval) startTime [, double samplePercent])`: Specifies a time frame for gathering sample data. Optionally, it also specifies the percentage of samples that must be available in the requested time frame. For example, `$CPUPercent.GetSample(TimeInterval_Minute * 10)` would return 20 samples if all samples for the last 10 minutes are present in the `CPUPercent` history. If the last minute of history wasn't available, only 18 samples would be returned. In this case `$CPUPercent.GetSample(TimeInterval_Minute * 10, 95)` would fail because only 90 percent of the samples are available, but `$CPUPercent.GetSample(TimeInterval_Minute * 10, 80)` would succeed.<br><br>- `doubleVec GetSample((timestamp or timeinterval) startTime, (timestamp or timeinterval) endTime [, double samplePercent])`: Specifies a time frame for gathering data, with both a start time and an end time. As mentioned above, there's a delay between when a sample is collected and when it becomes available to a formula. Consider this delay when you use the `GetSample` method. See `GetSamplePercent` below. |
| GetSamplePeriod() |Returns the period of samples that were taken in a historical sample data set. |
-| Count() |Returns the total number of samples in the metric history. |
+| Count() |Returns the total number of samples in the metrics history. |
| HistoryBeginTime() |Returns the time stamp of the oldest available data sample for the metric. | | GetSamplePercent() |Returns the percentage of samples that are available for a given time interval. For example, `doubleVec GetSamplePercent( (timestamp or timeinterval) startTime [, (timestamp or timeinterval) endTime] )`. Because the `GetSample` method fails if the percentage of samples returned is less than the `samplePercent` specified, you can use the `GetSamplePercent` method to check first. Then you can perform an alternate action if insufficient samples are present, without halting the automatic scaling evaluation. | ### Samples
-The Batch service periodically takes samples of task and resource metrics and makes them available to your autoscale formulas. These samples are recorded every 30 seconds by the Batch service. However, there's typically a delay between when those samples were recorded and when they're made available to (and read by) your autoscale formulas. Additionally, samples may not be recorded for a particular interval because of factors such as network or other infrastructure issues.
+The Batch service periodically takes samples of task and resource metrics and makes them available to your autoscale formulas. These samples are recorded every 30 seconds by the Batch service. However, there's typically a delay between when those samples were recorded and when they're made available to (and read by) your autoscale formulas. Additionally, samples might not be recorded for a particular interval because of factors such as network or other infrastructure issues.
### Sample percentage
-When `samplePercent` is passed to the `GetSample()` method or the `GetSamplePercent()` method is called, _percent_ refers to a comparison between the total possible number of samples recorded by the Batch service and the number of samples that are available to your autoscale formula.
+When `samplePercent` is passed to the `GetSample()` method or the `GetSamplePercent()` method is called, *percent* refers to a comparison between the total possible number of samples recorded by the Batch service and the number of samples that are available to your autoscale formula.
-Let's look at a 10-minute timespan as an example. Because samples are recorded every 30 seconds within that 10-minute timespan, the maximum total number of samples recorded by Batch would be 20 samples (2 per minute). However, due to the inherent latency of the reporting mechanism and other issues within Azure, there may be only 15 samples that are available to your autoscale formula for reading. So, for example, for that 10-minute period, only 75% of the total number of samples recorded may be available to your formula.
+Let's look at a 10-minute time span as an example. Because samples are recorded every 30 seconds within that 10-minute time span, the maximum total number of samples recorded by Batch would be 20 samples (2 per minute). However, due to the inherent latency of the reporting mechanism and other issues within Azure, there might be only 15 samples that are available to your autoscale formula for reading. So, for example, for that 10-minute period, only 75 percent of the total number of samples recorded might be available to your formula.
### GetSample() and sample ranges
-Your autoscale formulas grow or shrink your pools by adding or removing nodes. Because nodes cost you money, be sure that your formulas use an intelligent method of analysis that is based on sufficient data. We recommend that you use a trending-type analysis in your formulas. This type grows and shrinks your pools based on a range of collected samples.
+Your autoscale formulas grow and shrink your pools by adding or removing nodes. Because nodes cost you money, be sure that your formulas use an intelligent method of analysis that's based on sufficient data. It's recommended that you use a trending-type analysis in your formulas. This type grows and shrinks your pools based on a range of collected samples.
To do so, use `GetSample(interval look-back start, interval look-back end)` to return a vector of samples:
When Batch evaluates the above line, it returns a range of samples as a vector o
$runningTasksSample=[1,1,1,1,1,1,1,1,1,1]; ```
-Once you've collected the vector of samples, you can then use functions like `min()`, `max()`, and `avg()` to derive meaningful values from the collected range.
+After you collect the vector of samples, you can then use functions like `min()`, `max()`, and `avg()` to derive meaningful values from the collected range.
To exercise extra caution, you can force a formula evaluation to fail if less than a certain sample percentage is available for a particular time period. When you force a formula evaluation to fail, you instruct Batch to cease further evaluation of the formula if the specified percentage of samples isn't available. In this case, no change is made to the pool size. To specify a required percentage of samples for the evaluation to succeed, specify it as the third parameter to `GetSample()`. Here, a requirement of 75 percent of samples is specified:
To exercise extra caution, you can force a formula evaluation to fail if less th
$runningTasksSample = $RunningTasks.GetSample(60 * TimeInterval_Second, 120 * TimeInterval_Second, 75); ```
-Because there may be a delay in sample availability, you should always specify a time range with a look-back start time that is older than one minute. It takes approximately one minute for samples to propagate through the system, so samples in the range `(0 * TimeInterval_Second, 60 * TimeInterval_Second)` may not be available. Again, you can use the percentage parameter of `GetSample()` to force a particular sample percentage requirement.
+Because there might be a delay in sample availability, you should always specify a time range with a look-back start time that's older than one minute. It takes approximately one minute for samples to propagate through the system, so samples in the range `(0 * TimeInterval_Second, 60 * TimeInterval_Second)` might not be available. Again, you can use the percentage parameter of `GetSample()` to force a particular sample percentage requirement.
> [!IMPORTANT]
-> We strongly recommend that you **avoid relying *only* on `GetSample(1)` in your autoscale formulas**. This is because `GetSample(1)` essentially says to the Batch service, "Give me the last sample you have, no matter how long ago you retrieved it." Since it's only a single sample, and it may be an older sample, it may not be representative of the larger picture of recent task or resource state. If you do use `GetSample(1)`, make sure that it's part of a larger statement and not the only data point that your formula relies on.
+> We strongly recommend that you **avoid relying *only* on `GetSample(1)` in your autoscale formulas**. This is because `GetSample(1)` essentially says to the Batch service, "Give me the last sample you have, no matter how long ago you retrieved it." Since it's only a single sample, and it might be an older sample, it might not be representative of the larger picture of recent task or resource state. If you do use `GetSample(1)`, make sure that it's part of a larger statement and not the only data point that your formula relies on.
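As a minimal sketch of that guidance (the 70 percent threshold and 15-minute window are illustrative, and Example 2 later in this article applies the same pattern), `GetSample(1)` is used only as a fallback when too few samples are available:
```
// Prefer the richer 15-minute history; fall back to the single most
// recent sample only when the sample history is too sparse.
$samplePercent = $PendingTasks.GetSamplePercent(TimeInterval_Minute * 15);
$tasks = $samplePercent < 70 ?
    max(0, $PendingTasks.GetSample(1)) :
    avg($PendingTasks.GetSample(TimeInterval_Minute * 15));
```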
## Write an autoscale formula
-You build an autoscale formula by forming statements that use the above components, then combine those statements into a complete formula. In this section, we create an example autoscale formula that can perform real-world scaling decisions and make adjustments.
+You build an autoscale formula by forming statements that use the above components, then combine those statements into a complete formula. In this section, you create an example autoscale formula that can perform real-world scaling decisions and make adjustments.
First, let's define the requirements for our new autoscale formula. The formula should:
First, let's define the requirements for our new autoscale formula. The formula
- Always restrict the maximum number of dedicated nodes to 400. - When reducing the number of nodes, don't remove nodes that are running tasks; if necessary, wait until tasks have finished before removing nodes.
-The first statement in our formula increases the number of nodes during high CPU usage. We define a statement that populates a user-defined variable (`$totalDedicatedNodes`) with a value that is 110 percent of the current target number of dedicated nodes, but only if the minimum average CPU usage during the last 10 minutes was above 70 percent. Otherwise, it uses the value for the current number of dedicated nodes.
+The first statement in the formula increases the number of nodes during high CPU usage. You define a statement that populates a user-defined variable (`$totalDedicatedNodes`) with a value that is 110 percent of the current target number of dedicated nodes, but only if the minimum average CPU usage during the last 10 minutes was above 70 percent. Otherwise, it uses the value for the current number of dedicated nodes.
``` $totalDedicatedNodes =
$totalDedicatedNodes =
($CurrentDedicatedNodes * 1.1) : $CurrentDedicatedNodes; ```
-To decrease the number of dedicated nodes during low CPU usage, the next statement in our formula sets the same `$totalDedicatedNodes` variable to 90 percent of the current target number of dedicated nodes, if average CPU usage in the past 60 minutes was under 20 percent. Otherwise, it uses the current value of `$totalDedicatedNodes` that we populated in the statement above.
+To decrease the number of dedicated nodes during low CPU usage, the next statement in the formula sets the same `$totalDedicatedNodes` variable to 90 percent of the current target number of dedicated nodes, if average CPU usage in the past 60 minutes was under 20 percent. Otherwise, it uses the current value of `$totalDedicatedNodes` populated in the statement above.
``` $totalDedicatedNodes =
$totalDedicatedNodes =
($CurrentDedicatedNodes * 0.9) : $totalDedicatedNodes; ```
-Now, we limit the target number of dedicated compute nodes to a maximum of 400.
+Now, limit the target number of dedicated compute nodes to a maximum of 400.
```
-$TargetDedicatedNodes = min(400, $totalDedicatedNodes)
+$TargetDedicatedNodes = min(400, $totalDedicatedNodes);
```
-Finally, we ensure that nodes aren't removed until their tasks are finished.
+Finally, ensure that nodes aren't removed until their tasks are finished.
``` $NodeDeallocationOption = taskcompletion;
$totalDedicatedNodes =
$totalDedicatedNodes = (avg($CPUPercent.GetSample(TimeInterval_Minute * 60)) < 0.2) ? ($CurrentDedicatedNodes * 0.9) : $totalDedicatedNodes;
-$TargetDedicatedNodes = min(400, $totalDedicatedNodes)
+$TargetDedicatedNodes = min(400, $totalDedicatedNodes);
$NodeDeallocationOption = taskcompletion; ``` > [!NOTE]
-> If you choose to, you can include both comments and line breaks in formula strings. Also be aware that missing semicolons may result in evaluation errors.
+> If you choose to, you can include both comments and line breaks in formula strings. Also be aware that missing semicolons might result in evaluation errors.
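For instance, this small fragment (a sketch, not part of the formula built above) shows comments and line breaks inside a formula, with each statement still ending in a semicolon:
```
// Comments and line breaks are allowed inside the formula string.
$averageCpu = avg($CPUPercent.GetSample(TimeInterval_Minute * 10));
// Each statement must still end with a semicolon.
$TargetDedicatedNodes = $averageCpu > 0.7 ?
    ($CurrentDedicatedNodes * 1.1) : $CurrentDedicatedNodes;
```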
## Automatic scaling interval
Pool autoscaling can be configured using any of the [Batch SDKs](batch-apis-tool
To create a pool with autoscaling enabled in .NET, follow these steps: 1. Create the pool with [BatchClient.PoolOperations.CreatePool](/dotnet/api/microsoft.azure.batch.pooloperations.createpool).
-1. Set the [CloudPool.AutoScaleEnabled](/dotnet/api/microsoft.azure.batch.cloudpool.autoscaleenabled) property to `true`.
+1. Set the [CloudPool.AutoScaleEnabled](/dotnet/api/microsoft.azure.batch.cloudpool.autoscaleenabled) property to **true**.
1. Set the [CloudPool.AutoScaleFormula](/dotnet/api/microsoft.azure.batch.cloudpool.autoscaleformula) property with your autoscale formula. 1. (Optional) Set the [CloudPool.AutoScaleEvaluationInterval](/dotnet/api/microsoft.azure.batch.cloudpool.autoscaleevaluationinterval) property (default is 15 minutes). 1. Commit the pool with [CloudPool.Commit](/dotnet/api/microsoft.azure.batch.cloudpool.commit) or [CommitAsync](/dotnet/api/microsoft.azure.batch.cloudpool.commitasync).
await pool.CommitAsync();
``` > [!IMPORTANT]
-> When you create an autoscale-enabled pool, don't specify the _targetDedicatedNodes_ parameter or the _targetLowPriorityNodes_ parameter on the call to **CreatePool**. Instead, specify the **AutoScaleEnabled** and **AutoScaleFormula** properties on the pool. The values for these properties determine the target number of each type of node.
+> When you create an autoscale-enabled pool, don't specify the *targetDedicatedNodes* parameter or the *targetLowPriorityNodes* parameter on the call to `CreatePool`. Instead, specify the `AutoScaleEnabled` and `AutoScaleFormula` properties on the pool. The values for these properties determine the target number of each type of node.
> > To manually resize an autoscale-enabled pool (for example, with [BatchClient.PoolOperations.ResizePoolAsync](/dotnet/api/microsoft.azure.batch.pooloperations.resizepoolasync)), you must first disable automatic scaling on the pool, then resize it.
+> [!TIP]
+> For more examples of using the .NET SDK, see the [Batch .NET Quickstart repository](https://github.com/Azure-Samples/batch-dotnet-quickstart) on GitHub.
+ ### Python
-To create autoscale-enabled pool with the Python SDK:
+To create an autoscale-enabled pool with the Python SDK:
1. Create a pool and specify its configuration. 1. Add the pool to the service client.
response = batch_service_client.pool.enable_auto_scale(pool_id, auto_scale_formu
``` > [!TIP]
-> More examples of using the Python SDK can be found in the [Batch Python Quickstart repository](https://github.com/Azure-Samples/batch-python-quickstart) on GitHub.
+> For more examples of using the Python SDK, see the [Batch Python Quickstart repository](https://github.com/Azure-Samples/batch-python-quickstart) on GitHub.
## Enable autoscaling on an existing pool
When you enable autoscaling on an existing pool, keep in mind:
- If you omit either the autoscale formula or interval, the Batch service continues to use the current value of that setting. > [!NOTE]
-> If you specified values for the *targetDedicatedNodes* or *targetLowPriorityNodes* parameters of the **CreatePool** method when you created the pool in .NET, or for the comparable parameters in another language, then those values are ignored when the autoscale formula is evaluated.
+> If you specified values for the *targetDedicatedNodes* or *targetLowPriorityNodes* parameters of the `CreatePool` method when you created the pool in .NET, or for the comparable parameters in another language, then those values are ignored when the autoscale formula is evaluated.
This C# example uses the [Batch .NET](/dotnet/api/microsoft.azure.batch) library to enable autoscaling on an existing pool.
Before you can evaluate an autoscale formula, you must first enable autoscaling
In this REST API request, specify the pool ID in the URI, and the autoscale formula in the *autoScaleFormula* element of the request body. The response of the operation contains any error information that might be related to the formula.
-This [Batch .NET](/dotnet/api/microsoft.azure.batch) example evaluates an autoscale formula. If the pool doesn't already use autoscaling, we enable it first.
+The following [Batch .NET](/dotnet/api/microsoft.azure.batch) example evaluates an autoscale formula. If the pool doesn't already use autoscaling, the example enables it first.
```csharp // First obtain a reference to an existing pool
CloudPool pool = await batchClient.PoolOperations.GetPoolAsync("myExistingPool")
// You can't evaluate an autoscale formula on a non-autoscale-enabled pool. if (pool.AutoScaleEnabled == false) {
- // We need a valid autoscale formula to enable autoscaling on the
+ // You need a valid autoscale formula to enable autoscaling on the
// pool. This formula is valid, but won't resize the pool: await pool.EnableAutoScaleAsync( autoscaleFormula: "$TargetDedicatedNodes = $CurrentDedicatedNodes;", autoscaleEvaluationInterval: TimeSpan.FromMinutes(5)); // Batch limits EnableAutoScaleAsync calls to once every 30 seconds.
- // Because we want to apply our new autoscale formula below if it
- // evaluates successfully, and we *just* enabled autoscaling on
- // this pool, we pause here to ensure we pass that threshold.
+    // Because you want to apply your new autoscale formula below if it
+ // evaluates successfully, and you *just* enabled autoscaling on
+ // this pool, pause here to ensure you pass that threshold.
Thread.Sleep(TimeSpan.FromSeconds(31)); // Refresh the properties of the pool so that we've got the
if (pool.AutoScaleEnabled == false)
await pool.RefreshAsync(); }
-// We must ensure that autoscaling is enabled on the pool prior to
+// You must ensure that autoscaling is enabled on the pool prior to
// evaluating a formula if (pool.AutoScaleEnabled == true) {
AutoScaleRun.Results:
## Get information about autoscale runs
-It's recommended to periodically check the Batch service's evaluation of your autoscale formula. To do so, get
-(or refresh) a reference to the pool, then examine the properties of its last autoscale run.
+It's recommended to periodically check the Batch service's evaluation of your autoscale formula. To do so, get (or refresh) a reference to the pool, then examine the properties of its last autoscale run.
In Batch .NET, the [CloudPool.AutoScaleRun](/dotnet/api/microsoft.azure.batch.cloudpool.autoscalerun) property has several properties that provide information about the latest automatic scaling run performed on the pool:
In Batch .NET, the [CloudPool.AutoScaleRun](/dotnet/api/microsoft.azure.batch.cl
- [AutoScaleRun.Results](/dotnet/api/microsoft.azure.batch.autoscalerun.results) - [AutoScaleRun.Error](/dotnet/api/microsoft.azure.batch.autoscalerun.error)
-In the REST API, the [Get information about a pool](/rest/api/batchservice/get-information-about-a-pool) request returns information about the pool, which includes the latest automatic scaling run information in the [autoScaleRun](/rest/api/batchservice/get-information-about-a-pool) property.
+In the REST API, the [Get information about a pool request](/rest/api/batchservice/get-information-about-a-pool) returns information about the pool, which includes the latest automatic scaling run information in the [autoScaleRun](/rest/api/batchservice/get-information-about-a-pool) property.
-The following C# example uses the Batch .NET library to print information about the last autoscaling run on pool _myPool_.
+The following C# example uses the Batch .NET library to print information about the last autoscaling run on pool *myPool*.
```csharp CloudPool pool = await myBatchClient.PoolOperations.GetPoolAsync("myPool");
Error:
You can also check automatic scaling history by querying [PoolAutoScaleEvent](batch-pool-autoscale-event.md). Batch emits this event to record each occurrence of autoscale formula evaluation and execution, which can be helpful to troubleshoot potential issues. Sample event for PoolAutoScaleEvent:+ ```json { "id": "poolId",
$TargetDedicatedNodes = $isWorkingWeekdayHour ? 20:10;
$NodeDeallocationOption = taskcompletion; ```
-`$curTime` can be adjusted to reflect your local time zone by adding `time()` to the product of `TimeZoneInterval_Hour` and your UTC offset. For instance, use `$curTime = time() + (-6 * TimeInterval_Hour);` for Mountain Daylight Time (MDT). Keep in mind that the offset would need to be adjusted at the start and end of daylight saving time (if applicable).
+`$curTime` can be adjusted to reflect your local time zone by adding `time()` to the product of `TimeInterval_Hour` and your UTC offset. For instance, use `$curTime = time() + (-6 * TimeInterval_Hour);` for Mountain Daylight Time (MDT). Keep in mind that the offset needs to be adjusted at the start and end of daylight saving time, if applicable.
### Example 2: Task-based adjustment
-In this C# example, the pool size is adjusted based on the number of tasks in the queue. We've included both comments and line breaks in the formula strings.
+In this C# example, the pool size is adjusted based on the number of tasks in the queue. Both comments and line breaks are included in the formula strings.
```csharp // Get pending tasks for the past 15 minutes. $samples = $PendingTasks.GetSamplePercent(TimeInterval_Minute * 15);
-// If we have fewer than 70 percent data points, we use the last sample point,
-// otherwise we use the maximum of last sample point and the history average.
+// If you have fewer than 70 percent data points, use the last sample point,
+// otherwise use the maximum of last sample point and the history average.
$tasks = $samples < 70 ? max(0,$PendingTasks.GetSample(1)) : max( $PendingTasks.GetSample(1), avg($PendingTasks.GetSample(TimeInterval_Minute * 15))); // If number of pending tasks is not 0, set targetVM to pending tasks, otherwise // half of current dedicated.
$NodeDeallocationOption = taskcompletion;
### Example 3: Accounting for parallel tasks
-This C# example adjusts the pool size based on the number of tasks. This formula also takes into account the [TaskSlotsPerNode](/dotnet/api/microsoft.azure.batch.cloudpool.taskslotspernode) value that has been set for the pool. This approach is useful in situations where [parallel task execution](batch-parallel-node-tasks.md) has been enabled on your pool.
+This C# example adjusts the pool size based on the number of tasks. This formula also takes into account the [TaskSlotsPerNode](/dotnet/api/microsoft.azure.batch.cloudpool.taskslotspernode) value that's been set for the pool. This approach is useful in situations where [parallel task execution](batch-parallel-node-tasks.md) has been enabled on your pool.
```csharp // Determine whether 70 percent of the samples have been recorded in the past
Specifically, this formula does the following:
- Sets the initial pool size to four nodes. - Doesn't adjust the pool size within the first 10 minutes of the pool's lifecycle. - After 10 minutes, obtains the max value of the number of running and active tasks within the past 60 minutes.
- - If both values are 0 (indicating that no tasks were running or active in the last 60 minutes), the pool size is set to 0.
+ - If both values are 0, indicating that no tasks were running or active in the last 60 minutes, the pool size is set to 0.
- If either value is greater than zero, no change is made. ```csharp
string formula = string.Format(@"
## Next steps - Learn how to [execute multiple tasks simultaneously on the compute nodes in your pool](batch-parallel-node-tasks.md). Along with autoscaling, this can help to lower job duration for some workloads, saving you money.-- Learn how to [query the Azure Batch service efficiently](batch-efficient-list-queries.md) for further efficiency.
+- Learn how to [query the Azure Batch service efficiently](batch-efficient-list-queries.md).
batch Quick Create Terraform https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/batch/quick-create-terraform.md
Title: 'Quickstart: Create an Azure Batch account using Terraform'
description: 'In this article, you create an Azure Batch account using Terraform' Previously updated : 4/1/2023- Last updated : 4/14/2023+
cdn Create Profile Endpoint Terraform https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cdn/create-profile-endpoint-terraform.md
description: 'In this article, you create an Azure CDN profile and endpoint usin
Previously updated : 4/12/2023- Last updated : 4/14/2023+
cognitive-services Cognitive Services Data Loss Prevention https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/cognitive-services-data-loss-prevention.md
Title: Data Loss Prevention #Required; page title is displayed in search results. Include the brand.
-description: Cognitive Services Data Loss Prevention capabilities allow customers to configure the list of outbound URLs their Cognitive Services resources are allowed to access. This configuration creates another level of control for customers to prevent data loss. #Required; article description that is displayed in search results.
---- Previously updated : 03/31/2023 #Required; mm/dd/yyyy format.-
+ Title: Data Loss Prevention
+description: Cognitive Services Data Loss Prevention capabilities allow customers to configure the list of outbound URLs their Cognitive Services resources are allowed to access. This configuration creates another level of control for customers to prevent data loss.
++++ Last updated : 03/31/2023+ # Configure data loss prevention for Azure Cognitive Services
cognitive-services Create Account Terraform https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/create-account-terraform.md
keywords: cognitive services, cognitive solutions, cognitive intelligence, cogni
Previously updated : 3/29/2023- Last updated : 4/14/2023+
cognitive-services Content Filter https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/openai/concepts/content-filter.md
Last updated 06/30/2022-+ keywords:
communication-services Whats New https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/whats-new.md
Title: What's new in Azure Communication Services #Required; page title is displayed in search results. Include the brand.
-description: All of the latest additions to Azure Communication Services #Required; article description that is displayed in search results.
---- Previously updated : 03/12/2023 #Required; mm/dd/yyyy format.-
+ Title: What's new in Azure Communication Services
+description: All of the latest additions to Azure Communication Services
++++ Last updated : 03/12/2023+
container-apps Alerts https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-apps/alerts.md
Title: Set up alerts in Azure Container Apps description: Set up alerts to monitor your container app. -+ Last updated 08/30/2022-+ # Set up alerts in Azure Container Apps
container-apps Azure Arc Enable Cluster https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-apps/azure-arc-enable-cluster.md
Title: 'Tutorial: Enable Azure Container Apps on Azure Arc-enabled Kubernetes' description: 'Tutorial: learn how to set up Azure Container Apps in your Azure Arc-enabled Kubernetes clusters.' -+ Last updated 3/24/2023-+ # Tutorial: Enable Azure Container Apps on Azure Arc-enabled Kubernetes (Preview)
container-apps Container Console https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-apps/container-console.md
Title: Connect to a container console in Azure Container Apps description: Connect to a container console in your container app. -+ Last updated 08/30/2022-+
container-apps Containerapp Up https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-apps/containerapp-up.md
Title: Deploy Azure Container Apps with the az containerapp up command description: How to deploy a container app with the az containerapp up command -+ Last updated 11/08/2022-+ # Deploy Azure Container Apps with the az containerapp up command
container-apps Dapr Github Actions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-apps/dapr-github-actions.md
Title: Tutorial - Deploy a Dapr application with GitHub Actions for Azure Container Apps description: Learn about multiple revision management by deploying a Dapr application with GitHub Actions and Azure Container Apps. --++
container-apps Dapr Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-apps/dapr-overview.md
Previously updated : 01/25/2023 Last updated : 04/14/2023 # Dapr integration with Azure Container Apps
This guide provides insight into core Dapr concepts and details regarding the Da
| [**Secrets**][dapr-secrets] | Access secrets from your application code or reference secure values in your Dapr components. | > [!NOTE]
-> The above table covers stable Dapr APIs. To learn more about using alpha APIs and features, [see limitations](#unsupported-dapr-capabilities).
+> The above table covers stable Dapr APIs. To learn more about using alpha APIs and features, [see the Dapr FAQ][dapr-faq].
## Dapr concepts overview
This resource defines a Dapr component called `dapr-pubsub` via ARM.
-## Release cadence for Dapr
-
-The latest version of Dapr in Azure Container Apps will be available within six weeks after [the Dapr OSS release][dapr-release].
- ## Limitations ### Unsupported Dapr capabilities
The latest version of Dapr in Azure Container Apps will be available within six
- **Dapr Configuration spec**: Any capabilities that require use of the Dapr configuration spec. - **Declarative pub/sub subscriptions** - **Any Dapr sidecar annotations not listed above**-- **Alpha APIs and components**: Azure Container Apps doesn't guarantee the availability of Dapr alpha APIs and features. If available to use, they are on a self-service, opt-in basis. Alpha APIs and components are provided "as is" and "as available," and are continually evolving as they move toward stable status. Alpha APIs and components aren't covered by customer support.
+- **Alpha APIs and components**: Azure Container Apps doesn't guarantee the availability of Dapr alpha APIs and features. For more information, refer to the [Dapr FAQ][dapr-faq].
### Known limitations
Now that you've learned about Dapr and some of the challenges it solves:
- Try [Deploying a Dapr application to Azure Container Apps using the Azure CLI][dapr-quickstart] or [Azure Resource Manager][dapr-arm-quickstart]. - Walk through a tutorial [using GitHub Actions to automate changes for a multi-revision, Dapr-enabled container app][dapr-github-actions]. - Learn how to [perform event-driven work using Dapr bindings][dapr-bindings-tutorial]
+- [Answer common questions about the Dapr integration with Azure Container Apps][dapr-faq]
<!-- Links Internal -->
Now that you've learned about Dapr and some of the challenges it solves:
[dapr-arm-quickstart]: ./microservices-dapr-azure-resource-manager.md [dapr-github-actions]: ./dapr-github-actions.md [dapr-bindings-tutorial]: ./microservices-dapr-bindings.md
+[dapr-faq]: ./faq.yml#dapr
<!-- Links External -->
container-apps Get Started https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-apps/get-started.md
Title: 'Quickstart: Deploy your first container app with containerapp up' description: Deploy your first application to Azure Container Apps using the Azure CLI containerapp up command. -+ Last updated 03/29/2023-+ ms.devlang: azurecli
container-apps Log Monitoring https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-apps/log-monitoring.md
Title: Monitor logs in Azure Container Apps with Log Analytics description: Monitor your container app logs with Log Analytics -+ Last updated 08/30/2022-+ # Monitor logs in Azure Container Apps with Log Analytics
container-apps Log Options https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-apps/log-options.md
Title: Log storage and monitoring options in Azure Container Apps description: Description of logging options in Azure Container Apps -+ Last updated 09/29/2022-+ # Log storage and monitoring options in Azure Container Apps
container-apps Log Streaming https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-apps/log-streaming.md
Title: View log streams in Azure Container Apps description: View your container app's log stream. -+ Last updated 03/24/2023-+ # View log streams in Azure Container Apps
container-apps Logging https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-apps/logging.md
Title: Application logging in Azure Container Apps description: Description of logging in Azure Container Apps -+ Last updated 09/29/2022-+ # Application Logging in Azure Container Apps
container-apps Managed Identity Image Pull https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-apps/managed-identity-image-pull.md
Title: Azure Container Apps image pull from Azure Container Registry with managed identity description: Set up Azure Container Apps to authenticate Azure Container Registry image pulls with managed identity -+ Last updated 09/16/2022-+ zone_pivot_groups: container-apps-interface-types
container-apps Managed Identity https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-apps/managed-identity.md
Title: Managed identities in Azure Container Apps description: Using managed identities in Container Apps -+ Last updated 09/29/2022-+ # Managed identities in Azure Container Apps
container-apps Metrics https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-apps/metrics.md
Title: Monitor Azure Container Apps metrics description: Monitor your running apps metrics -+ Last updated 08/30/2022-+ # Monitor Azure Container Apps metrics
container-apps Observability https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-apps/observability.md
Title: Observability in Azure Container Apps description: Monitor your running app in Azure Container Apps -+ Last updated 07/29/2022-+ # Observability in Azure Container Apps
container-apps Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-apps/policy-reference.md
Title: Built-in policy definitions for Azure Container Apps description: Lists Azure Policy built-in policy definitions for Azure Container Apps. These built-in policy definitions provide common approaches to managing your Azure resources.--++ Last updated 02/21/2023
container-apps Quickstart Code To Cloud https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-apps/quickstart-code-to-cloud.md
Title: "Quickstart: Build and deploy your app from a repository to Azure Container Apps" description: Build your container app from a local or GitHub source repository and deploy in Azure Container Apps using az containerapp up. -+ Last updated 03/29/2023-+ zone_pivot_groups: container-apps-image-build-from-repo
container-apps Quickstart Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-apps/quickstart-portal.md
Title: 'Quickstart: Deploy your first container app using the Azure portal' description: Deploy your first application to Azure Container Apps using the Azure portal. -+ Last updated 12/13/2021-+
container-apps Service Connector https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-apps/service-connector.md
Title: Connect a container app to a cloud service with Service Connector description: Learn to connect a container app to an Azure service using the Azure portal or the CLI.--++ Last updated 06/16/2022
container-instances Container Instances Quickstart Terraform https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-instances/container-instances-quickstart-terraform.md
Title: 'Quickstart: Create an Azure Container Instance with a public IP address
description: 'In this article, you create an Azure Container Instance with a public IP address using Terraform' Previously updated : 3/16/2023- Last updated : 4/14/2023+
cosmos-db Continuous Backup Restore Introduction https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/continuous-backup-restore-introduction.md
description: Azure Cosmos DB's point-in-time restore feature helps to recover da
Previously updated : 03/02/2023 Last updated : 03/31/2023
The time window available for restore (also known as retention period) is the lo
The selected option depends on the chosen tier of continuous backup. The point in time for restore can be any timestamp within the retention period no further back than the point when the resource was created. In strong consistency mode, backups taken in the write region are more up to date when compared to the read regions. Read regions can lag behind due to network or other transient issues. While doing restore, you can [get the latest restorable timestamp](get-latest-restore-timestamp.md) for a given resource in a specific region. Getting the latest timestamp ensures that the resource has taken backups up to the given timestamp, and can restore in that region.
-Currently, you can restore an Azure Cosmos DB account (API for NoSQL or MongoDB) contents at a specific point in time to another account. You can perform this restore operation via the [Azure portal](restore-account-continuous-backup.md#restore-account-portal), the [Azure CLI](restore-account-continuous-backup.md#restore-account-cli) (Azure CLI), [Azure PowerShell](restore-account-continuous-backup.md#restore-account-powershell), or [Azure Resource Manager templates](restore-account-continuous-backup.md#restore-arm-template). API for Table or Gremlin are in preview and supported through [Azure CLI](restore-account-continuous-backup.md#restore-account-cli) (Azure CLI) and [Azure PowerShell](restore-account-continuous-backup.md#restore-account-powershell).
+Currently, you can restore the contents of an Azure Cosmos DB account (API for NoSQL or MongoDB, API for Table, API for Gremlin) at a specific point in time to another account. You can perform this restore operation via the [Azure portal](restore-account-continuous-backup.md#restore-account-portal), the [Azure CLI](restore-account-continuous-backup.md#restore-account-cli), [Azure PowerShell](restore-account-continuous-backup.md#restore-account-powershell), or [Azure Resource Manager templates](restore-account-continuous-backup.md#restore-arm-template).
## Backup storage redundancy
Currently the point in time restore functionality has the following limitations:
* Azure Cosmos DB APIs for SQL and MongoDB are supported for continuous backup. API for Cassandra isn't supported now.
-* API for Table and Gremlin are in preview and supported via PowerShell and Azure CLI.
- * Multi-regions write accounts aren't supported. * Currently Azure Synapse Link isn't fully compatible with continuous backup mode. For more information about backup with analytical store, see [analytical store backup](analytical-store-introduction.md#backup).
cosmos-db Continuous Backup Restore Permissions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/continuous-backup-restore-permissions.md
description: Learn how to isolate and restrict the restore permissions for conti
Previously updated : 02/17/2023 Last updated : 03/31/2023
# Manage permissions to restore an Azure Cosmos DB account [!INCLUDE[NoSQL, MongoDB, Gremlin, Table](includes/appliesto-nosql-mongodb-gremlin-table.md)]
-Azure Cosmos DB allows you to isolate and restrict the restore permissions for continuous backup account to a specific role or a principal. The owner of the account can trigger a restore and assign a role to other principals to perform the restore operation. These permissions can be applied at the subscription scope or more granularly at the source account scope as shown in the following image:
+Azure Cosmos DB allows you to isolate and restrict the restore permissions for a continuous backup account to a specific role or a principal. These permissions can be applied at the subscription scope or more granularly at the source account scope as shown in the following image:
:::image type="content" source="./media/continuous-backup-restore-permissions/restore-roles-permissions.svg" alt-text="List of roles required to perform restore operation." border="false":::
Scope is a set of resources that have access, to learn more on scopes, see the [
## Assign roles for restore using the Azure portal
-To perform a restore, a user or a principal need the permission to restore (that is *restore/action* permission), and permission to provision a new account (that is *write* permission). To grant these permissions, the owner can assign the `CosmosRestoreOperator` and `Cosmos DB Operator` built in roles to a principal.
+To perform a restore, a user or a principal needs the permission to restore (that is, the *restore/action* permission) and the permission to provision a new account (that is, the *write* permission). To grant these permissions, the owner of the subscription can assign the `CosmosRestoreOperator` and `Cosmos DB Operator` built-in roles to a principal.
1. Sign into the [Azure portal](https://portal.azure.com/) and navigate to your subscription. The `CosmosRestoreOperator` role is available at subscription level.
Following permissions are required to perform the different activities pertainin
Roles with permission can be assigned to different scopes to achieve granular control on who can perform the restore operation within a subscription or a given account. ### Assign capability to restore from any restorable account in a subscription-- Assign a user write action on the specific resource group. This action is required to create a new account in the resource group.-- Assign the `CosmosRestoreOperator` built in role to the specific restorable database account that needs to be restored. In the following command, the scope for the `RestorableDatabaseAccount` is extracted from the `ID` property of result of execution of `az cosmosdb restorable-database-account list`(if using CLI) or `Get-AzCosmosDBRestorableDatabaseAccount`(if using the PowerShell)
-Assign the `CosmosRestoreOperator` built-in role at subscription level
+- Assign the `CosmosRestoreOperator` built-in role at the subscription level
```azurecli-interactive az role assignment create --role "CosmosRestoreOperator" --assignee <email> --scope /subscriptions/<subscriptionId> ```
-### Assign capability to restore from a specific account
-This operation is currently not supported.
+### Assign capability to restore from a specific account
+- Assign the user the *write* action on the specific resource group. This action is required to create a new account in the resource group.
+- Assign the `CosmosRestoreOperator` built-in role to the specific restorable database account that needs to be restored. In the following command, the scope for the `RestorableDatabaseAccount` is extracted from the `ID` property of the result of running `az cosmosdb restorable-database-account list` (if using the Azure CLI) or `Get-AzCosmosDBRestorableDatabaseAccount` (if using PowerShell).
+
+```azurecli-interactive
+az role assignment create --role "CosmosRestoreOperator" --assignee <email> --scope <RestorableDatabaseAccount>
+```
### Assign capability to restore from any source account in a resource group. This operation is currently not supported.
cosmos-db Periodic Backup Restore Introduction https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/periodic-backup-restore-introduction.md
Previously updated : 03/21/2023 Last updated : 04/02/2023
For Azure Synapse Link enabled accounts, analytical store data isn't included in
Two backups are provided free and extra backups are charged according to the region-based pricing for backup storage described in [backup storage pricing](https://azure.microsoft.com/pricing/details/cosmos-db/).
-For example, consider a scenario where Backup Retention is configured to **240 hrs** (or **10 days**) and Backup Interval is configured to **24 hours**. This configuration implies that there are **10** copies of the backup data. If you have **1 TB** of data in an Azure region, the cost for backup storage in a given month would be: `0.12 * 1000 * 8`
+For example, consider a scenario where Backup Retention is configured to **240 hrs** (or **10 days**) and Backup Interval is configured to **24 hours**. This configuration implies that there are **10** copies of the backup data. If you have **1 TB** of data in the Azure West US region, the cost for backup storage in a given month would be: `0.12 * 1000 * 8`
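Working through that formula: the 10 backup copies minus the 2 free copies leave 8 chargeable copies, so the month's backup storage cost is 0.12 × 1,000 GB × 8 = $960, using the $0.12 per GB rate shown in the formula; actual rates vary by region and are listed on the backup storage pricing page.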
## Required permissions to manage retention or restoration
cost-management-billing Cost Management Billing Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/cost-management-billing-overview.md
Microsoft offers a wide range of tools for optimizing your costs. Some of these
- There are many [**free services**](https://azure.microsoft.com/pricing/free-services/) available in Azure. Be sure to pay close attention to the constraints. Different services are free indefinitely, for 12 months, or 30 days. Some are free up to a specific amount of usage and some may have dependencies on other services that aren't free. - The [**Azure pricing calculator**](https://azure.microsoft.com/pricing/calculator/) is the best place to start when planning a new deployment. You can tweak many aspects of the deployment to understand how you'll be charged for that service and identify which SKUs/options will keep you within your desired price range. For more information about pricing for each of the services you use, see [pricing details](https://azure.microsoft.com/pricing/). - [**Azure Advisor cost recommendations**](./costs/tutorial-acm-opt-recommendations.md) should be your first stop when interested in optimizing existing resources. Advisor recommendations are updated daily and are based on your usage patterns. Advisor is available for subscriptions and resource groups. Management group users can also see recommendations but will need to select the desired subscriptions. Billing users can only see recommendations for subscriptions they have resource access to.-- [**Azure saving plans**](./savings-plan/index.yml) save you money when you have consistent usage of Azure compute resources. A savings plan can significantly reduce your resource costs by up to 65% from pay-as-you-go prices.
+- [**Azure savings plans**](./savings-plan/index.yml) save you money when you have consistent usage of Azure compute resources. A savings plan can significantly reduce your resource costs by up to 65% from pay-as-you-go prices.
- [**Azure reservations**](https://azure.microsoft.com/reservations/) help you save up to 72% compared to pay-as-you-go rates by pre-committing to specific usage amounts for a set time duration. - [**Azure Hybrid Benefit**](https://azure.microsoft.com/pricing/hybrid-benefit/) helps you significantly reduce costs by using on-premises Windows Server and SQL Server licenses or RedHat and SUSE Linux subscriptions on Azure.
cost-management-billing Enable Preview Features Cost Management Labs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/costs/enable-preview-features-cost-management-labs.md
Once you know which resources you'd like to group, use the following steps to ta
2. Select **Properties** in the resource menu. 3. Find the **Resource ID** property and copy its value. 4. Open **All resources** or the resource group that has the resources you want to link.
-5. Select the checkboxes for every resource you want to link and click the **Assign tags** command.
+5. Select the checkboxes for every resource you want to link and then select the **Assign tags** command.
6. Specify a tag key of "cm-resource-parent" (make sure it's typed correctly) and paste the resource ID from step 3. 7. Wait 24 hours for new usage to be sent to Cost Management with the tags. (Keep in mind resources must be actively running with charges for tags to be updated in Cost Management.) 8. Open the [Resources view](https://aka.ms/costanalysis/resources) in the cost analysis preview.
Cost insights surface important details about your subscriptions, like potential
## View cost for your resources
-Cost analysis is available from every management group, subscription, resource group, and billing scope in the Azure portal and the Microsoft 365 admin center. To make cost data more readily accessible for resource owners, you can now find a **View cost** link at the top-right of every resource overview screen, in **Essentials**. Clicking the link will open classic cost analysis with a resource filter applied.
+Cost analysis is available from every management group, subscription, resource group, and billing scope in the Azure portal and the Microsoft 365 admin center. To make cost data more readily accessible for resource owners, you can now find a **View cost** link at the top-right of every resource overview screen, in **Essentials**. Select the link to open classic cost analysis with a resource filter applied.
The view cost link is enabled by default in the [Azure preview portal](https://preview.portal.azure.com).
cost-management-billing Exchange And Refund Azure Reservations https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/reservations/exchange-and-refund-azure-reservations.md
Previously updated : 03/21/2023 Last updated : 04/14/2023
However, you can't exchange dissimilar reservations. For example, you can't exch
You can also exchange a reservation to purchase another reservation of a similar type in a different region. For example, you can exchange a reservation that's in West US 2 region for one that's in West Europe region. > [!NOTE]
-> Exchanges will be unavailable for all compute reservations - Azure Reserved Virtual Machine Instances, Azure Dedicated Host reservations, and Azure App Services reservations - purchased on or after **January 1, 2024**. Compute reservations purchased **prior to January 1, 2024** will reserve the right to **exchange one more time** after the policy change goes into effect. Microsoft launched Azure savings plan for compute and it's designed to help you save broadly on predictable compute usage. The savings plan provides more flexibility needed to accommodate changes such as virtual machine series and regions. With savings plan providing the flexibility automatically, we’re adjusting our reservations exchange policy. You can continue to exchange VM sizes (with instance size flexibility) but we'll no longer support exchanging instance series or regions for Azure Reserved Virtual Machine Instances, Azure Dedicated Host reservations, and Azure App Services reservations. For more information about the exchange policy change, see [Changes to the Azure reservation exchange policy](reservation-exchange-policy-changes.md).
+> Exchanges will be unavailable for all compute reservations - Azure Reserved Virtual Machine Instances, Azure Dedicated Host reservations, and Azure App Services reservations - purchased on or after **January 1, 2024**. Compute reservations purchased **prior to January 1, 2024** will reserve the right to **exchange one more time** after the policy change goes into effect. Microsoft launched Azure savings plan for compute and it's designed to help you save broadly on predictable compute usage. The savings plan provides more flexibility needed to accommodate changes such as virtual machine series and regions. With savings plan providing the flexibility automatically, we’re adjusting our reservations exchange policy. You can continue to use instance size flexibility for VM sizes, but we'll no longer support exchanging instance series or regions for Azure Reserved Virtual Machine Instances, Azure Dedicated Host reservations, and Azure App Services reservations. For more information about the exchange policy change, see [Changes to the Azure reservation exchange policy](reservation-exchange-policy-changes.md).
> > You may [trade-in](../savings-plan/reservation-trade-in.md) your Azure compute reservations for a savings plan. Or, you may continue to use and purchase reservations for those predictable, stable workloads where you know the specific configuration youΓÇÖll need and want additional savings. Learn more about [Azure savings plan for compute and how it works with reservations](../savings-plan/index.yml).
cost-management-billing Reservation Exchange Policy Changes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/reservations/reservation-exchange-policy-changes.md
Previously updated : 03/27/2023 Last updated : 04/14/2023 # Changes to the Azure reservation exchange policy
Exchanges will be unavailable for all compute reservations - Azure Reserved Virt
Microsoft launched Azure savings plan for compute and it's designed to help you save broadly on predictable compute usage. The savings plan provides more flexibility needed to accommodate changes such as virtual machine series and regions. With savings plans providing the flexibility automatically, weΓÇÖre adjusting our reservations exchange policy.
-You can continue to exchange VM sizes (with instance size flexibility). However, Microsoft is ending exchanges for regions and instance series for these Azure compute reservations.
+You can continue to use instance size flexibility for VM sizes, but Microsoft is ending exchanges for regions and instance series for these Azure compute reservations.
The current cancellation policy for reserved instances isn't changing. The total canceled commitment can't exceed 50,000 USD in a 12-month rolling window for a billing profile or single enrollment.
cost-management-billing Reservation Trade In https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/savings-plan/reservation-trade-in.md
Previously updated : 03/21/2023 Last updated : 04/14/2023
The following reservations aren't eligible to be traded in for savings plans:
- SUSE Linux plans > [!NOTE]
-> Exchanges will be unavailable for all compute reservations - Azure Reserved Virtual Machine Instances, Azure Dedicated Host reservations, and Azure App Services reservations - purchased on or after **January 1, 2024**. Compute reservations purchased **prior to January 1, 2024** will reserve the right to **exchange one more time** after the policy change goes into effect. Azure savings plan for compute is designed to help you save broadly on predictable compute usage. The savings plan provides more flexibility needed to accommodate changes such as virtual machine series and regions. With savings plan providing the flexibility automatically, we’re adjusting our reservations exchange policy. You can continue to exchange VM sizes (with instance size flexibility) but we'll no longer support exchanging instance series or regions for Azure Reserved Virtual Machine Instances, Azure Dedicated Host reservations, and Azure App Services reservations.
+> Exchanges will be unavailable for all compute reservations - Azure Reserved Virtual Machine Instances, Azure Dedicated Host reservations, and Azure App Services reservations - purchased on or after **January 1, 2024**. Compute reservations purchased **prior to January 1, 2024** will reserve the right to **exchange one more time** after the policy change goes into effect. Azure savings plan for compute is designed to help you save broadly on predictable compute usage. The savings plan provides more flexibility needed to accommodate changes such as virtual machine series and regions. With savings plan providing the flexibility automatically, we’re adjusting our reservations exchange policy. You can continue to use instance size flexibility for VM sizes, but we'll no longer support exchanging instance series or regions for Azure Reserved Virtual Machine Instances, Azure Dedicated Host reservations, and Azure App Services reservations.
> > You may trade-in your Azure compute reservations for a savings plan. Or, you may continue to use and purchase reservations for those predictable, stable workloads where you know the specific configuration youΓÇÖll need and want additional savings. For more information, see [Self-service exchanges and refunds for Azure Reservations](../reservations/exchange-and-refund-azure-reservations.md).
cost-management-billing Download Azure Daily Usage https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/understand/download-azure-daily-usage.md
Use the following information to download usage for billed charges. The same ste
1. In the invoice grid, find the row of the invoice corresponding to the usage file that you want to download. 1. Select the ellipsis symbol (`...`) at the end of the row. 1. In the context menu, select **Prepare Azure usage file**. A notification message appears stating that the usage file is being prepared.
-1. When the file is ready to download, select the **Click here to download** link in the notification. If you missed the notification, you can view it from **Notifications** area in top right of the Azure portal (the bell symbol).
+1. When the file is ready to download, select **Download**. If you missed the notification, you can view it from **Notifications** area in top right of the Azure portal (the bell symbol).
## Get usage data with Azure CLI
data-factory Data Flow Parse https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/data-flow-parse.md
In the parse transformation configuration panel, you'll first pick the type of d
### Column
-Similar to derived columns and aggregates, this is where you'll either modify an exiting column by selecting it from the drop-down picker. Or you can type in the name of a new column here. ADF will store the parsed source data in this column. In most cases, you'll want to define a new column that parses the incoming embedded document string field.
+Similar to derived columns and aggregates, this is where you'll either modify an existing column by selecting it from the drop-down picker or type in the name of a new column. ADF will store the parsed source data in this column. In most cases, you'll want to define a new column that parses the incoming embedded document string field.
### Expression
data-factory How To Manage Studio Preview Exp https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/how-to-manage-studio-preview-exp.md
Previously updated : 02/16/2023 Last updated : 04/12/2023 # Manage Azure Data Factory studio preview experience
The monitoring experience remains the same as detailed [here](monitor-visually.m
#### Error message relocation to Status column
+> [!NOTE]
+> This feature is now generally available in the ADF studio.
+ To make it easier for you to view errors when you see a **Failed** pipeline run, error messages have been relocated to the **Status** column. Find the error icon in the pipeline monitoring page and in the pipeline **Output** tab after debugging your pipeline.
Find the error icon in the pipeline monitoring page and in the pipeline **Output
#### Container view > [!NOTE]
-> This feature will now be generally available in the ADF studio.
+> This feature is now generally available in the ADF studio.
When monitoring your pipeline run, you have the option to enable the container view, which will provide a consolidated view of the activities that ran. This view is available in the output of your pipeline debug run and in the detailed monitoring view found in the monitoring tab.
Click the button next to the iteration or conditional activity to collapse the n
#### Simplified default monitoring view
-The default monitoring view has been simplified with fewer default columns. You can add/remove columns if youΓÇÖd like to personalize your monitoring view. Changes to the default will be cached.
+The default monitoring view has been simplified with fewer default columns. You can add/remove columns if youΓÇÖd like to personalize your monitoring view. Changes to the default will be cached.
:::image type="content" source="media/how-to-manage-studio-preview-exp/data-factory-preview-exp-20.png" alt-text="Screenshot of the new default column view on the monitoring page.":::
Add columns by clicking **Add column** or remove columns by clicking the trashca
:::image type="content" source="media/how-to-manage-studio-preview-exp/data-factory-preview-exp-22.png" alt-text="Screenshot of the Add column button and trashcan icon to edit column view.":::
+You can also now view **Pipeline run details** in a new pane in the detailed pipeline monitoring view by clicking **View run detail**.
++ ## Provide feedback We want to hear from you! If you see this pop-up, please let us know your thoughts by providing feedback on the updates you've tested.
data-manager-for-agri Concepts Hierarchy Model https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-manager-for-agri/concepts-hierarchy-model.md
Title: Hierarchy model in Azure Data Manager for Agriculture description: Provides information on the data model to organize your agriculture data. -+ -+ Last updated 02/14/2023-+ # Hierarchy model to organize agriculture related data
data-manager-for-agri Concepts Ingest Satellite Imagery https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-manager-for-agri/concepts-ingest-satellite-imagery.md
Title: Ingesting satellite data in Azure Data Manager for Agriculture description: Provides step by step guidance to ingest Satellite data-+ -+ Last updated 02/14/2023-+ # Using satellite imagery in Azure Data Manager for Agriculture
data-manager-for-agri Concepts Ingest Sensor Data https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-manager-for-agri/concepts-ingest-sensor-data.md
Title: Ingesting sensor data in Azure Data Manager for Agriculture description: Provides step by step guidance to ingest Sensor data.-+ -+ Last updated 02/14/2023-+ # Ingesting sensor data
data-manager-for-agri Concepts Ingest Weather Data https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-manager-for-agri/concepts-ingest-weather-data.md
description: Learn how to fetch weather data from various weather data providers
-+ Last updated 02/14/2023-+ # Weather data overview
data-manager-for-agri Concepts Isv Solutions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-manager-for-agri/concepts-isv-solutions.md
Title: ISV solution framework in Azure Data Manager for Agriculture description: Provides information on using solutions from ISVs -+ -+ Last updated 02/14/2023-+ # What is our Solution Framework?
data-manager-for-agri How To Set Up Audit Logs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-manager-for-agri/how-to-set-up-audit-logs.md
Title: Enable logging for Azure Data Manager for Agriculture description: Learn how enable logging and debugging in Azure Data Manager for Agriculture-+ -+ Last updated 04/10/2023-+ # Azure Data Manager for Agriculture logging
data-manager-for-agri How To Set Up Isv Solution https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-manager-for-agri/how-to-set-up-isv-solution.md
Title: Use ISV solutions with Data Manager for Agriculture. description: Learn how to use APIs from a third-party solution-+ -+ Last updated 02/14/2023-+ # How do I use an ISV solution?
data-manager-for-agri How To Set Up Private Links https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-manager-for-agri/how-to-set-up-private-links.md
Title: Creating a private endpoint for Azure Data Manager for Agriculture description: Learn how to use private links in Azure Data Manager for Agriculture-+ -+ Last updated 03/22/2023-+ # Create a private endpoint for Azure Data Manager for Agriculture
data-manager-for-agri How To Set Up Sensors Customer https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-manager-for-agri/how-to-set-up-sensors-customer.md
Title: How to set up a sensor in Azure Data Manager for Agriculture description: Provides step by step guidance to integrate Sensor as a customer-+ -+ Last updated 02/14/2023-+ # Sensor integration as a customer
data-manager-for-agri How To Set Up Sensors Partner https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-manager-for-agri/how-to-set-up-sensors-partner.md
description: Provides guidance to set up your sensors as a partner
-+ Last updated 02/14/2023-+ # Sensor partner integration flow
data-manager-for-agri How To Use Nutrient Apis https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-manager-for-agri/how-to-use-nutrient-apis.md
Title: Use plant tissue nutrients APIs in Azure Data Manager for Agriculture description: Learn how to store nutrient data in Azure Data Manager for Agriculture-+ -+ Last updated 02/14/2023-+ # Using tissue samples data
data-manager-for-agri How To Write Weather Extension https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-manager-for-agri/how-to-write-weather-extension.md
description: Provides guidance to use weather extension
-+ Last updated 02/14/2023-+ # How to write a weather extension
data-manager-for-agri Overview Azure Data Manager For Agriculture https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-manager-for-agri/overview-azure-data-manager-for-agriculture.md
Title: What is Microsoft Azure Data Manager for Agriculture? #Required; page title is displayed in search results. Include the brand.
-description: About Azure Data Manager for Agriculture #Required; article description that is displayed in search results.
-
+ Title: What is Microsoft Azure Data Manager for Agriculture?
+description: About Azure Data Manager for Agriculture
+ -- Previously updated : 02/14/2023 #Required; mm/dd/yyyy format.-++ Last updated : 02/14/2023+ # What is Azure Data Manager for Agriculture Preview?
data-manager-for-agri Quickstart Install Data Manager For Agriculture https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-manager-for-agri/quickstart-install-data-manager-for-agriculture.md
Title: How to install Azure Data Manager for Agriculture description: Provides step by step guidance to install Data Manager for Agriculture-+ Last updated 04/05/2023-+ # Quickstart install Azure Data Manager for Agriculture preview
databox-online Azure Stack Edge Pro 2 Deploy Configure Compute https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/databox-online/azure-stack-edge-pro-2-deploy-configure-compute.md
Previously updated : 02/02/2023 Last updated : 04/14/2023 # Customer intent: As an IT admin, I need to understand how to configure compute on Azure Stack Edge Pro so I can use it to transform the data before sending it to Azure.
Before you set up a compute role on your Azure Stack Edge Pro device, make sure
- Enabled a network interface for compute. - Assigned Kubernetes node IPs and Kubernetes external service IPs.
+ > [!NOTE]
+ > If your datacenter firewall is restricting or filtering traffic based on source IPs or MAC addresses, make sure that the compute IPs (Kubernetes node IPs) and MAC addresses are on the allowed list. The MAC addresses can be specified by running the `Set-HcsMacAddressPool` cmdlet on the PowerShell interface of the device.
+ ## Configure compute [!INCLUDE [configure-compute](../../includes/azure-stack-edge-gateway-configure-compute.md)]
ddos-protection Manage Ddos Protection Terraform https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ddos-protection/manage-ddos-protection-terraform.md
-+ Previously updated : 4/12/2023 Last updated : 4/14/2023 # Quickstart: Create and configure Azure DDoS Network Protection using Terraform
In this article, you learn how to:
-Name $ddos_protection_plan_name ```
-1. Get the virtual network name.
-
- ```console
- $virtual_network_name=$(terraform output -raw virtual_network_name)
- ```
-
-1. Run [Get-AzVirtualNetwork](/powershell/module/az.network/get-azvirtualnetwork) to display information about the new virtual network.
-
- ```azurepowershell
- Get-AzVirtualNetwork -ResourceGroupName $resource_group_name `
- -Name $virtual_network_name
- ```
- ## Clean up resources
defender-for-iot Alerts https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/alerts.md
While you can view alert details, investigate alert context, and triage and mana
|**OT network sensor consoles** | Alerts generated by that OT sensor | - View the alert's source and destination in the **Device map** <br>- View related events on the **Event timeline** <br>- Forward alerts directly to partner vendors <br>- Create alert comments <br> - Create custom alert rules <br>- Unlearn alerts | |**An on-premises management console** | Alerts generated by connected OT sensors | - Forward alerts directly to partner vendors <br> - Create alert exclusion rules |
-For more information, see [Accelerating OT alert workflows](#accelerating-ot-alert-workflows) and [Alert statuses and triaging options](alerts.md#alert-statuses-and-triaging-options) below.
+For more information, see:
+
+- [Alert data retention](references-data-retention.md#alert-data-retention)
+- [Accelerating OT alert workflows](#accelerating-ot-alert-workflows)
+- [Alert statuses and triaging options](alerts.md#alert-statuses-and-triaging-options)
Alert options also differ depending on your location and user role. For more information, see [Azure user roles and permissions](roles-azure.md) and [On-premises users and roles](roles-on-premises.md).
defender-for-iot Sample Connectivity Models https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/best-practices/sample-connectivity-models.md
The following diagram shows an example of a ring network topology, in which each
## Sample: Linear bus and star topology
-In a star network, every host is connected to a central hub. In its simplest form, one central hub acts as a conduit to transmit messages. In the following example, lower switches aren't monitored, and traffic that remains local to these switches won't be seen. Devices might be identified based on ARP messages, but connection information will be missing.
+In a star network such as the one shown in the diagram below, every host is connected to a central hub. In its simplest form, one central hub acts as a conduit to transmit messages. In the following example, lower switches aren't monitored, and traffic that remains local to these switches won't be seen. Devices might be identified based on ARP messages, but connection information will be missing.
:::image type="content" source="../media/how-to-set-up-your-network/linear-bus-star-topology.png" alt-text="Diagram of the linear bus and star topology." border="false" lightbox="../media/how-to-set-up-your-network/linear-bus-star-topology.png"::: ## Sample: Multi-layer, multi-tenant network
-The following diagram is a general abstraction of a multilayer, multitenant network, with an expansive cybersecurity ecosystem typically operated by an SOC and MSSP.
-
-Typically, NTA sensors are deployed in layers 0 to 3 of the OSI model.
+The following diagram is a general abstraction of a multilayer, multi-tenant network, with an expansive cybersecurity ecosystem typically operated by a security operations center (SOC) and a managed security service provider (MSSP). Defender for IoT sensors are typically deployed in layers 0 to 3 of the OSI model.
:::image type="content" source="../media/how-to-set-up-your-network/osi-model.png" alt-text="Diagram of the OSI model." lightbox="../media/how-to-set-up-your-network/osi-model.png" border="false"::: ## Next steps
-After you've [understood your own network's OT architecture](understand-network-architecture.md) and [planned out your deployment](plan-network-monitoring.md), learn more about methods for traffic mirroring and passive or active monitoring.
-
-For more information, see:
--- [Traffic mirroring methods for OT monitoring](traffic-mirroring-methods.md)
+For more information, see [Traffic mirroring methods for OT monitoring](traffic-mirroring-methods.md).
defender-for-iot Compliance https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/compliance.md
+
+ Title: Compliance - Microsoft Defender for IoT
+description: Learn about compliance resources for Microsoft Defender for IoT.
+ Last updated : 04/14/2023++
+# Microsoft Defender for IoT compliance resources
+
+Defender for IoT cloud services, formerly named *Azure Defender for IoT* or *Azure Security for IoT*, are based on Microsoft Azure's infrastructure, which meets demanding US government and international compliance requirements that produce formal authorizations.
+
+## Provisional authorizations
+
+Defender for IoT is in scope for the following provisional authorizations in Azure and Azure Government:
+
+- [FedRAMP High](/azure/compliance/offerings/offering-fedramp)
+- [DoD IL2](/azure/compliance/offerings/offering-dod-il2)
+
+Moreover, Defender for IoT maintains extra [DoD IL4](/azure/compliance/offerings/offering-dod-il4) and [DoD IL5](/azure/compliance/offerings/offering-dod-il5) provisional authorizations in Azure Government.
+
+For more information, see [Azure and other Microsoft cloud services compliance scope](/azure/azure-government/compliance/azure-services-in-fedramp-auditscope#azure-public-services-by-audit-scope).
++
+## Accessibility
+
+Defender for IoT is committed to developing technology that empowers everyone, including people with disabilities, and helps customers address global accessibility requirements.
+
+For more information, search for *Azure Security for IoT* in [Accessibility Conformance Reports | Microsoft Accessibility](https://www.microsoft.com/accessibility/conformance-reports?rtc=1).
+
+## Cloud compliance
+
+Defender for IoT helps customers meet their compliance obligations across regulated industries and markets worldwide.
+
+For more information, see [Azure and other Microsoft cloud services compliance offerings](/azure/compliance/offerings/).
+
+## Next steps
+
+> [!div class="nextstepaction"]
+> [Welcome to Microsoft Defender for IoT for organizations](overview.md)
defender-for-iot How To Accelerate Alert Incident Response https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/how-to-accelerate-alert-incident-response.md
For more information, see
## Next steps
-> [!div class="nextstepaction"]
-> [View and manage alerts from the Azure portal](how-to-manage-cloud-alerts.md)
-
-> [!div class="nextstepaction"]
-> [View and manage alerts on your OT sensor](how-to-view-alerts.md)
-
-> [!div class="nextstepaction"]
-> [Forward alert information](how-to-forward-alert-information-to-partners.md)
-
-> [!div class="nextstepaction"]
-> [OT monitoring alert types and descriptions](alert-engine-messages.md)
-
-> [!div class="nextstepaction"]
-> [View and manage alerts on the the on-premises management console](how-to-work-with-alerts-on-premises-management-console.md)
- > [!div class="nextstepaction"] > [Microsoft Defender for IoT alerts](alerts.md)
defender-for-iot How To Forward Alert Information To Partners https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/how-to-forward-alert-information-to-partners.md
If your forwarding alert rules aren't working as expected, check the following d
## Next steps
-> [!div class="nextstepaction"]
-> [Microsoft Defender for IoT alerts](alerts.md)
-
-> [!div class="nextstepaction"]
-> [View and manage alerts on your OT sensor](how-to-view-alerts.md)
-
-> [!div class="nextstepaction"]
-> [View and manage alerts from the Azure portal](how-to-manage-cloud-alerts.md)
-
-> [!div class="nextstepaction"]
-> [OT monitoring alert types and descriptions](alert-engine-messages.md)
-- > [!div class="nextstepaction"] > [Microsoft Defender for IoT alerts](alerts.md)
defender-for-iot How To Investigate All Enterprise Sensor Detections In A Device Inventory https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/how-to-investigate-all-enterprise-sensor-detections-in-a-device-inventory.md
For more information, see [Defender for IoT sensor and management console APIs](
For more information, see:
+- [Defender for IoT device inventory](device-inventory.md)
- [Control what traffic is monitored](how-to-control-what-traffic-is-monitored.md) - [Detect Windows workstations and servers with a local script](detect-windows-endpoints-script.md) - [Device data retention periods](references-data-retention.md#device-data-retention-periods).
defender-for-iot How To Investigate Sensor Detections In A Device Inventory https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/how-to-investigate-sensor-detections-in-a-device-inventory.md
All devices detected within the range of the filter will be deleted. If you dele
For more information, see:
+- [Defender for IoT device inventory](device-inventory.md)
- [Control what traffic is monitored](how-to-control-what-traffic-is-monitored.md) - [Detect Windows workstations and servers with a local script](detect-windows-endpoints-script.md) - [Device data retention periods](references-data-retention.md#device-data-retention-periods)
defender-for-iot How To Manage Cloud Alerts https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/how-to-manage-cloud-alerts.md
The file is generated, and you're prompted to save it locally.
## Next steps
-> [!div class="nextstepaction"]
-> [Forward alert information](how-to-forward-alert-information-to-partners.md)
-
-> [!div class="nextstepaction"]
-> [OT monitoring alert types and descriptions](alert-engine-messages.md)
- > [!div class="nextstepaction"] > [Microsoft Defender for IoT alerts](alerts.md)-
-> [!div class="nextstepaction"]
-> [Data retention across Microsoft Defender for IoT](references-data-retention.md)
defender-for-iot How To Manage Device Inventory For Organizations https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/how-to-manage-device-inventory-for-organizations.md
The merged device that is now listed in the grid retains the details of the devi
For more information, see:
+- [Defender for IoT device inventory](device-inventory.md)
- [Control what traffic is monitored](how-to-control-what-traffic-is-monitored.md) - [Detect Windows workstations and servers with a local script](detect-windows-endpoints-script.md) - [Device data retention periods](references-data-retention.md#device-data-retention-periods).
defender-for-iot How To Manage Sensors On The Cloud https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/how-to-manage-sensors-on-the-cloud.md
If you need to open a support ticket for a locally managed sensor, upload a diag
## Next steps
-> [!div class="nextstepaction"]
-> [Manage OT sensors from the sensor console](how-to-manage-individual-sensors.md)
- > [!div class="nextstepaction"] > [Define and view OT sensor settings from the Azure portal (Public preview)](configure-sensor-settings-portal.md) > [!div class="nextstepaction"]
-> [View and manage alerts on the Defender for IoT portal (Preview)](how-to-manage-cloud-alerts.md)
+> [Manage OT sensors from the sensor console](how-to-manage-individual-sensors.md)
defender-for-iot How To Track Sensor Activity https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/how-to-track-sensor-activity.md
Activity detected by your Microsoft Defender for IoT sensors is recorded in the event timeline. Activity includes alerts and alert management actions, network events, and user operations such as user sign-in or user deletion.
-The event timeline provides a chronological view and context of all network activity, to help determine the cause and effect of incidents. The timeline view makes it easy to extract information from network events, and more efficiently analyze alerts and events observed on the network. With the ability to store vast amounts of data, the event timeline view can be a valuable resource for security teams to perform investigations and gain a deeper understanding of network activity.
+The OT sensor's event timeline provides a chronological view and context of all network activity, to help determine the cause and effect of incidents. The timeline view makes it easy to extract information from network events, and more efficiently analyze alerts and events observed on the network. With the ability to store vast amounts of data, the event timeline view can be a valuable resource for security teams to perform investigations and gain a deeper understanding of network activity.
Use the event timeline during investigations, to understand and analyze the chain of events that preceded and followed an attack or incident. The centralized view of multiple security-related events on the same timeline helps to identify patterns and correlations, and enable security teams to quickly assess the impact of incidents and respond accordingly.
-Enhance your security analysis and incident investigations with the event timeline, with the following options:
+For more information, see:
- [View events on the timeline](#view-the-event-timeline)- - [Audit user activity](track-user-activity.md)- - [View and manage alerts](how-to-view-alerts.md#view-details-and-remediate-a-specific-alert)- - [Analyze programming details and changes](how-to-analyze-programming-details-changes.md) ## Permissions
-Administrator or Security Analyst permissions are required to perform the procedures described in this article.
+Before you perform the procedures described in this article, make sure that you have access to an OT sensor with the **Admin** or **Security Analyst** role. For more information, see [On-premises users and roles for OT monitoring with Defender for IoT](roles-on-premises.md).
## View the event timeline
The maximum number of events shown in the event timeline is dependent on [the ha
## Next steps
-[Audit user activity](track-user-activity.md)
+For more information, see:
-[View details and remediate a specific alert](how-to-view-alerts.md#view-details-and-remediate-a-specific-alert)
-
-[Analyze programming details and changes](how-to-analyze-programming-details-changes.md)
+- [Audit user activity](track-user-activity.md)
+- [View details and remediate a specific alert](how-to-view-alerts.md#view-details-and-remediate-a-specific-alert)
+- [Analyze programming details and changes](how-to-analyze-programming-details-changes.md)
defender-for-iot How To View Alerts https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/how-to-view-alerts.md
For more information, see [Accelerating OT alert workflows](alerts.md#accelerati
## Next steps
-> [!div class="nextstepaction"]
-> [View and manage alerts from the Azure portal](how-to-manage-cloud-alerts.md)
-
-> [!div class="nextstepaction"]
-> [View and manage alerts on the the on-premises management console](how-to-work-with-alerts-on-premises-management-console.md)
-
-> [!div class="nextstepaction"]
-> [Accelerate alert workflows on an OT network sensor](how-to-accelerate-alert-incident-response.md)
-
-> [!div class="nextstepaction"]
-> [Forward alert information](how-to-forward-alert-information-to-partners.md)
-
-> [!div class="nextstepaction"]
-> [OT monitoring alert types and descriptions](alert-engine-messages.md)
-
-> [!div class="nextstepaction"]
-> [Microsoft Defender for IoT alerts](alerts.md)
- > [!div class="nextstepaction"] > [Data retention across Microsoft Defender for IoT](references-data-retention.md)
defender-for-iot On Premises Sentinel https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/integrations/on-premises-sentinel.md
Title: How to connect on-premises Defender for IoT resources to Microsoft Sentinel description: Learn how to stream data into Microsoft Sentinel from an on-premises and locally-managed Microsoft Defender for IoT OT network sensor or an on-premises management console.-+ Last updated 12/26/2022-+ # Connect on-premises OT network sensors to Microsoft Sentinel
defender-for-iot Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/overview.md
For more information, see the [Microsoft Defender for IoT for device builders do
Defender for IoT routes all traffic from all European regions to the *West Europe* regional datacenter. It routes traffic from all remaining regions to the *East US* regional datacenter.
-## Compliance scope
-
-Defender for IoT cloud services (formerly *Azure Defender for IoT* or *Azure Security for IoT*) are based on Microsoft Azure's infrastructure, which meets demanding US government and international compliance requirements that produce formal authorizations.
-
-Specifically:
-- Defender for IoT is in scope for the following provisional authorizations in Azure and Azure Government: [FedRAMP High](/azure/compliance/offerings/offering-fedramp) and [DoD IL2](/azure/compliance/offerings/offering-dod-il2). Moreover, Defender for IoT maintains extra [DoD IL4](/azure/compliance/offerings/offering-dod-il4) and [DoD IL5](/azure/compliance/offerings/offering-dod-il5) provisional authorizations in Azure Government. For more information, see [Azure and other Microsoft cloud services compliance scope](/azure/azure-government/compliance/azure-services-in-fedramp-auditscope#azure-public-services-by-audit-scope).-- Defender for IoT is committed to developing technology that empowers everyone, including people with disabilities, and helps customers address global accessibility requirements. For more information, search for *Azure Security for IoT* in [Accessibility Conformance Reports | Microsoft Accessibility](https://www.microsoft.com/accessibility/conformance-reports?rtc=1).-- Defender for IoT helps customers meet their compliance obligations across regulated industries and markets worldwide. For more information, see [Azure and other Microsoft cloud services compliance offerings](/azure/compliance/offerings/).- ## Next steps > [!div class="nextstepaction"]
defender-for-iot Whats New https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/whats-new.md
Features released earlier than nine months ago are described in the [What's new
|Service area |Updates | |||
-| **OT networks** | **Sensor version 22.3.6**: <br>- [Support for transient devices](#support-for-transient-devices)<br>- [Learn DNS traffic by configuring allowlists](#learn-dns-traffic-by-configuring-allowlists)<br>- [Device data retention updates](#device-data-retention-updates)<br>- [UI enhancements when uploading SSL/TLS certificates](#ui-enhancements-when-uploading-ssltls-certificates)<br>- [Activation files expiration updates](#activation-files-expiration-updates)<br>- [UI enhancements for managing the device inventory](#ui-enhancements-for-managing-the-device-inventory)<br>- [Updated severity for all Suspicion of Malicious Activity alerts](#updated-severity-for-all-suspicion-of-malicious-activity-alerts)<br>- [Automatically resolved device notifications](#automatically-resolved-device-notifications) <br><br> **Cloud features**: <br>- [New Microsoft Sentinel incident experience for Defender for IoT](#new-microsoft-sentinel-incident-experience-for-defender-for-iot) |
+| **OT networks** | **Sensor version 22.3.6/22.3.7**: <br>- [Support for transient devices](#support-for-transient-devices)<br>- [Learn DNS traffic by configuring allowlists](#learn-dns-traffic-by-configuring-allowlists)<br>- [Device data retention updates](#device-data-retention-updates)<br>- [UI enhancements when uploading SSL/TLS certificates](#ui-enhancements-when-uploading-ssltls-certificates)<br>- [Activation files expiration updates](#activation-files-expiration-updates)<br>- [UI enhancements for managing the device inventory](#ui-enhancements-for-managing-the-device-inventory)<br>- [Updated severity for all Suspicion of Malicious Activity alerts](#updated-severity-for-all-suspicion-of-malicious-activity-alerts)<br>- [Automatically resolved device notifications](#automatically-resolved-device-notifications) <br><br>Version 22.3.7 includes the same features as 22.3.6. If you have version 22.3.6 installed, we strongly recommend that you update to version 22.3.7, which also includes important bug fixes.<br><br> **Cloud features**: <br>- [New Microsoft Sentinel incident experience for Defender for IoT](#new-microsoft-sentinel-incident-experience-for-defender-for-iot) |
### Support for transient devices
devtest-labs Create Lab Windows Vm Terraform https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/devtest-labs/quickstarts/create-lab-windows-vm-terraform.md
Title: 'Quickstart: Create a lab in Azure DevTest Labs using Terraform' description: 'In this article, you create a Windows virtual machine in a lab within Azure DevTest Labs using Terraform' Previously updated : 2/27/2023- Last updated : 4/14/2023+
dns Dns Delegate Domain Azure Dns https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dns/dns-delegate-domain-azure-dns.md
Last updated 09/27/2022 -+ #Customer intent: As an experienced network administrator, I want to configure Azure DNS, so I can host DNS zones.
dns Dns Get Started Terraform https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dns/dns-get-started-terraform.md
Title: 'Quickstart: Create an Azure DNS zone and record using Terraform'
description: 'In this article, you create an Azure DNS zone and record using Terraform' Previously updated : 3/17/2023- Last updated : 4/14/2023+
energy-data-services Concepts Csv Parser Ingestion https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/energy-data-services/concepts-csv-parser-ingestion.md
Title: Microsoft Azure Data Manager for Energy Preview csv parser ingestion workflow concept #Required; page title is displayed in search results. Include the brand.
-description: Learn how to use CSV parser ingestion. #Required; article description that is displayed in search results.
----
+ Title: Microsoft Azure Data Manager for Energy Preview csv parser ingestion workflow concept
+description: Learn how to use CSV parser ingestion.
++++ Last updated 02/10/2023-+ # CSV parser ingestion concepts
energy-data-services Concepts Ddms https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/energy-data-services/concepts-ddms.md
Title: Domain data management services concepts #Required; page title is displayed in search results. Include the brand.
-description: Learn how to use Domain Data Management Services #Required; article description that is displayed in search results.
----
+ Title: Domain data management services concepts
+description: Learn how to use Domain Data Management Services
++++ Last updated 08/18/2022-+ # Domain data management service concepts
energy-data-services Concepts Entitlements https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/energy-data-services/concepts-entitlements.md
Title: Microsoft Azure Data Manager for Energy Preview entitlement concepts #Required; page title is displayed in search results. Include the brand.
-description: This article describes the various concepts regarding the entitlement services in Azure Data Manager for Energy Preview #Required; article description that is displayed in search results.
----
+ Title: Microsoft Azure Data Manager for Energy Preview entitlement concepts
+description: This article describes the various concepts regarding the entitlement services in Azure Data Manager for Energy Preview
++++ Last updated 02/10/2023-+ # Entitlement service
energy-data-services Concepts Index And Search https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/energy-data-services/concepts-index-and-search.md
Title: Microsoft Azure Data Manager for Energy Preview - index and search workflow concepts #Required; page title is displayed in search results. Include the brand.
-description: Learn how to use indexing and search workflows #Required; article description that is displayed in search results.
----
+ Title: Microsoft Azure Data Manager for Energy Preview - index and search workflow concepts
+description: Learn how to use indexing and search workflows
++++ Last updated 02/10/2023-+ #Customer intent: As a developer, I want to understand indexing and search workflows so that I could search for ingested data in the platform.
energy-data-services Concepts Manifest Ingestion https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/energy-data-services/concepts-manifest-ingestion.md
Title: Microsoft Azure Data Manager for Energy Preview manifest ingestion concepts #Required; page title is displayed in search results. Include the brand.
-description: This article describes manifest ingestion concepts #Required; article description that is displayed in search results.
----
+ Title: Microsoft Azure Data Manager for Energy Preview manifest ingestion concepts
+description: This article describes manifest ingestion concepts
++++ Last updated 08/18/2022-+ # Manifest-based ingestion concepts
energy-data-services How To Convert Segy To Ovds https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/energy-data-services/how-to-convert-segy-to-ovds.md
Title: Microsoft Azure Data Manager for Energy Preview - How to convert a segy to ovds file #Required; page title is displayed in search results. Include the brand.
-description: This article explains how to convert a SGY file to oVDS file format #Required; article description that is displayed in search results.
----
+ Title: Microsoft Azure Data Manager for Energy Preview - How to convert a segy to ovds file
+description: This article explains how to convert a SGY file to oVDS file format
++++ Last updated 08/18/2022-+ # How to convert a SEG-Y file to oVDS
energy-data-services How To Convert Segy To Zgy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/energy-data-services/how-to-convert-segy-to-zgy.md
Title: Microsoft Azure Data Manager for Energy Preview - How to convert segy to zgy file #Required; page title is displayed in search results. Include the brand.
-description: This article describes how to convert a SEG-Y file to a ZGY file #Required; article description that is displayed in search results.
----
+ Title: Microsoft Azure Data Manager for Energy Preview - How to convert segy to zgy file
+description: This article describes how to convert a SEG-Y file to a ZGY file
++++ Last updated 08/18/2022-+ # How to convert a SEG-Y file to ZGY
energy-data-services How To Enable Cors https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/energy-data-services/how-to-enable-cors.md
Title: How to enable CORS - Azure Data Manager for Energy Preview #Required; page title is displayed in search results. Include the brand.
-description: Guide on CORS in Azure data manager for Energy and how to set up CORS #Required; article description that is displayed in search results.
---- Previously updated : 02/28/2023 #Required; mm/dd/yyyy format.-
+ Title: How to enable CORS - Azure Data Manager for Energy Preview
+description: Guide on CORS in Azure Data Manager for Energy and how to set up CORS
++++ Last updated : 02/28/2023+ # Use CORS for resource sharing in Azure Data Manager for Energy Preview This document helps you, as a user of Azure Data Manager for Energy Preview, set up CORS policies.
energy-data-services How To Generate Refresh Token https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/energy-data-services/how-to-generate-refresh-token.md
Title: How to generate a refresh token for Microsoft Azure Data Manager for Energy Preview #Required; page title is displayed in search results. Include the brand.
-description: This article describes how to generate a refresh token #Required; article description that is displayed in search results.
----
+ Title: How to generate a refresh token for Microsoft Azure Data Manager for Energy Preview
+description: This article describes how to generate a refresh token
++++ Last updated 10/06/2022-+ #Customer intent: As a developer, I want to learn how to generate a refresh token
energy-data-services How To Integrate Airflow Logs With Azure Monitor https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/energy-data-services/how-to-integrate-airflow-logs-with-azure-monitor.md
Last updated 08/18/2022-+ # Integrate airflow logs with Azure Monitor
energy-data-services How To Integrate Elastic Logs With Azure Monitor https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/energy-data-services/how-to-integrate-elastic-logs-with-azure-monitor.md
Last updated 08/18/2022-+ # Integrate elastic logs with Azure Monitor
energy-data-services How To Manage Data Security And Encryption https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/energy-data-services/how-to-manage-data-security-and-encryption.md
Title: Data security and encryption in Microsoft Azure Data Manager for Energy Preview #Required; page title is displayed in search results. Include the brand.
-description: Guide on security in Azure Data Manager for Energy Preview and how to set up customer managed keys on Azure Data Manager for Energy Preview #Required; article description that is displayed in search results.
----
+ Title: Data security and encryption in Microsoft Azure Data Manager for Energy Preview
+description: Guide on security in Azure Data Manager for Energy Preview and how to set up customer managed keys on Azure Data Manager for Energy Preview
++++ Last updated 10/06/2022-+ #Customer intent: As a developer, I want to set up customer-managed keys on Azure Data Manager for Energy Preview.
energy-data-services How To Manage Legal Tags https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/energy-data-services/how-to-manage-legal-tags.md
Title: How to manage legal tags in Microsoft Azure Data Manager for Energy Preview #Required; page title is displayed in search results. Include the brand.
-description: This article describes how to manage legal tags in Azure Data Manager for Energy Preview #Required; article description that is displayed in search results.
----
+ Title: How to manage legal tags in Microsoft Azure Data Manager for Energy Preview
+description: This article describes how to manage legal tags in Azure Data Manager for Energy Preview
++++ Last updated 02/20/2023-+ # How to manage legal tags
energy-data-services How To Manage Users https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/energy-data-services/how-to-manage-users.md
Title: How to manage users in Microsoft Azure Data Manager for Energy Preview #Required; page title is displayed in search results. Include the brand.
-description: This article describes how to manage users in Azure Data Manager for Energy Preview #Required; article description that is displayed in search results.
----
+ Title: How to manage users in Microsoft Azure Data Manager for Energy Preview
+description: This article describes how to manage users in Azure Data Manager for Energy Preview
++++ Last updated 08/19/2022-+ # How to manage users
energy-data-services Overview Ddms https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/energy-data-services/overview-ddms.md
Title: Overview of domain data management services - Microsoft Azure Data Manager for Energy Preview #Required; page title is displayed in search results. Include the brand.
-description: This article provides an overview of Domain data management services #Required; article description that is displayed in search results.
---
+ Title: Overview of domain data management services - Microsoft Azure Data Manager for Energy Preview
+description: This article provides an overview of Domain data management services
+++ Last updated 09/01/2022
energy-data-services Overview Microsoft Energy Data Services https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/energy-data-services/overview-microsoft-energy-data-services.md
Title: What is Microsoft Azure Data Manager for Energy Preview? #Required; page title is displayed in search results. Include the brand.
-description: This article provides an overview of Azure Data Manager for Energy Preview #Required; article description that is displayed in search results.
-
+ Title: What is Microsoft Azure Data Manager for Energy Preview?
+description: This article provides an overview of Azure Data Manager for Energy Preview
+ -- Previously updated : 02/08/2023 #Required; mm/dd/yyyy format.-++ Last updated : 02/08/2023+ # What is Azure Data Manager for Energy Preview?
energy-data-services Quickstart Create Microsoft Energy Data Services Instance https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/energy-data-services/quickstart-create-microsoft-energy-data-services-instance.md
Title: Create a Microsoft Azure Data Manager for Energy Preview instance #Required; page title is displayed in search results. Include the brand.
-description: Quickly create an Azure Data Manager for Energy Preview instance #Required; article description that is displayed in search results.
----
+ Title: Create a Microsoft Azure Data Manager for Energy Preview instance
+description: Quickly create an Azure Data Manager for Energy Preview instance
++++ Last updated 08/18/2022-+ # Quickstart: Create an Azure Data Manager for Energy Preview instance
energy-data-services Release Notes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/energy-data-services/release-notes.md
Title: Release notes for Microsoft Azure Data Manager for Energy Preview #Required; page title is displayed in search results. Include the brand.
-description: This article provides release notes of Azure Data Manager for Energy Preview releases, improvements, bug fixes, and known issues. #Required; article description that is displayed in search results.
---- Previously updated : 09/20/2022 #Required; mm/dd/yyyy format.-
+ Title: Release notes for Microsoft Azure Data Manager for Energy Preview
+description: This article provides release notes of Azure Data Manager for Energy Preview releases, improvements, bug fixes, and known issues.
++++ Last updated : 09/20/2022+ # Release Notes for Azure Data Manager for Energy Preview
energy-data-services Tutorial Csv Ingestion https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/energy-data-services/tutorial-csv-ingestion.md
Title: Microsoft Azure Data Manager for Energy Preview - Steps to perform a CSV parser ingestion #Required; page title is displayed in search results. Include the brand.
-description: This tutorial shows you how to perform CSV parser ingestion #Required; article description that is displayed in search results.
----
+ Title: Microsoft Azure Data Manager for Energy Preview - Steps to perform a CSV parser ingestion
+description: This tutorial shows you how to perform CSV parser ingestion
++++ Last updated 09/19/2022-+ #Customer intent: As a customer, I want to learn how to use CSV parser ingestion so that I can load CSV data into the Azure Data Manager for Energy Preview instance.
energy-data-services Tutorial Manifest Ingestion https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/energy-data-services/tutorial-manifest-ingestion.md
Title: Microsoft Azure Data Manager for Energy Preview - Steps to perform a manifest-based file ingestion #Required; page title is displayed in search results. Include the brand.
-description: This tutorial shows you how to perform Manifest ingestion #Required; article description that is displayed in search results.
----
+ Title: Microsoft Azure Data Manager for Energy Preview - Steps to perform a manifest-based file ingestion
+description: This tutorial shows you how to perform Manifest ingestion
++++ Last updated 08/18/2022-+ #Customer intent: As a customer, I want to learn how to use manifest ingestion so that I can load manifest information into the Azure Data Manager for Energy Preview instance.
energy-data-services Tutorial Seismic Ddms Sdutil https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/energy-data-services/tutorial-seismic-ddms-sdutil.md
Title: Microsoft Azure Data Manager for Energy Preview - Seismic store sdutil tutorial #Required; page title is displayed in search results. Include the brand.
-description: Information on setting up and using sdutil, a command-line interface (CLI) tool that allows users to easily interact with seismic store. #Required; article description that is displayed in search results.
----
+ Title: Microsoft Azure Data Manager for Energy Preview - Seismic store sdutil tutorial
+description: Information on setting up and using sdutil, a command-line interface (CLI) tool that allows users to easily interact with seismic store.
++++ Last updated 09/09/2022-+ #Customer intent: As a developer, I want to learn how to use sdutil so that I can load data into the seismic store.
energy-data-services Tutorial Seismic Ddms https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/energy-data-services/tutorial-seismic-ddms.md
Title: Tutorial - Sample steps to interact with Seismic DDMS in Microsoft Azure Data Manager for Energy Preview #Required; page title is displayed in search results. Include the brand.
-description: This tutorial shows you how to interact with Seismic DDMS Azure Data Manager for Energy Preview #Required; article description that is displayed in search results.
----
+ Title: Tutorial - Sample steps to interact with Seismic DDMS in Microsoft Azure Data Manager for Energy Preview
+description: This tutorial shows you how to interact with Seismic DDMS Azure Data Manager for Energy Preview
++++ Last updated 3/16/2022-+ # Tutorial: Sample steps to interact with Seismic DDMS
expressroute Expressroute Howto Routing Portal Resource Manager https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/expressroute/expressroute-howto-routing-portal-resource-manager.md
Last updated 07/13/2022 -+ # Tutorial: Create and modify peering for an ExpressRoute circuit using the Azure portal
hdinsight Apache Hadoop Use Hive Curl https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/hadoop/apache-hadoop-use-hive-curl.md
Preserve your credentials to avoid reentering them for each example. The cluste
Edit the script below by replacing `PASSWORD` with your actual password. Then enter the command. ```bash
-export password='PASSWORD'
+export PASSWORD='PASSWORD'
``` **B. PowerShell**
The actual casing of the cluster name may be different than you expect, dependin
Edit the scripts below to replace `CLUSTERNAME` with your cluster name. Then enter the command. (The cluster name for the FQDN isn't case-sensitive.) ```bash
-export clusterName=$(curl -u admin:$password -sS -G "https://CLUSTERNAME.azurehdinsight.net/api/v1/clusters" | jq -r '.items[].Clusters.cluster_name')
-echo $clusterName
+export CLUSTER_NAME=$(curl -u admin:$PASSWORD -sS -G "https://CLUSTERNAME.azurehdinsight.net/api/v1/clusters" | jq -r '.items[].Clusters.cluster_name')
+echo $CLUSTER_NAME
``` ```powershell
$clusterName
1. To verify that you can connect to your HDInsight cluster, use one of the following commands: ```bash
- curl -u admin:$password -G https://$clusterName.azurehdinsight.net/templeton/v1/status
+ curl -u admin:$PASSWORD -G https://$CLUSTER_NAME.azurehdinsight.net/templeton/v1/status
``` ```powershell
$clusterName
1. The beginning of the URL, `https://$CLUSTER_NAME.azurehdinsight.net/templeton/v1`, is the same for all requests. The path, `/status`, indicates that the request is to return a status of WebHCat (also known as Templeton) for the server. You can also request the version of Hive by using the following command: ```bash
- curl -u admin:$password -G https://$clusterName.azurehdinsight.net/templeton/v1/version/hive
+ curl -u admin:$PASSWORD -G https://$CLUSTER_NAME.azurehdinsight.net/templeton/v1/version/hive
``` ```powershell
$clusterName
1. Use the following to create a table named **log4jLogs**: ```bash
- jobid=$(curl -s -u admin:$password -d user.name=admin -d execute="DROP+TABLE+log4jLogs;CREATE+EXTERNAL+TABLE+log4jLogs(t1+string,t2+string,t3+string,t4+string,t5+string,t6+string,t7+string)+ROW+FORMAT+DELIMITED+FIELDS+TERMINATED+BY+' '+STORED+AS+TEXTFILE+LOCATION+'/example/data/';SELECT+t4+AS+sev,COUNT(*)+AS+count+FROM+log4jLogs+WHERE+t4+=+'[ERROR]'+AND+INPUT__FILE__NAME+LIKE+'%25.log'+GROUP+BY+t4;" -d statusdir="/example/rest" https://$clusterName.azurehdinsight.net/templeton/v1/hive | jq -r .id)
- echo $jobid
+ JOB_ID=$(curl -s -u admin:$PASSWORD -d user.name=admin -d execute="DROP+TABLE+log4jLogs;CREATE+EXTERNAL+TABLE+log4jLogs(t1+string,t2+string,t3+string,t4+string,t5+string,t6+string,t7+string)+ROW+FORMAT+DELIMITED+FIELDS+TERMINATED+BY+' '+STORED+AS+TEXTFILE+LOCATION+'/example/data/';SELECT+t4+AS+sev,COUNT(*)+AS+count+FROM+log4jLogs+WHERE+t4+=+'[ERROR]'+AND+INPUT__FILE__NAME+LIKE+'%25.log'+GROUP+BY+t4;" -d statusdir="/example/rest" https://$CLUSTER_NAME.azurehdinsight.net/templeton/v1/hive | jq -r .id)
+ echo $JOB_ID
``` ```powershell
$clusterName
1. To check the status of the job, use the following command: ```bash
- curl -u admin:$password -d user.name=admin -G https://$clusterName.azurehdinsight.net/templeton/v1/jobs/$jobid | jq .status.state
+    curl -u admin:$PASSWORD -d user.name=admin -G https://$CLUSTER_NAME.azurehdinsight.net/templeton/v1/jobs/$JOB_ID | jq .status.state
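    # Not part of the original article: a minimal polling sketch that reuses the status endpoint above
    # to wait until the job leaves the RUNNING/PREP states. It assumes PASSWORD, CLUSTER_NAME, and
    # JOB_ID are still exported from the previous steps.
    while true; do
        STATE=$(curl -s -u admin:$PASSWORD -d user.name=admin -G "https://$CLUSTER_NAME.azurehdinsight.net/templeton/v1/jobs/$JOB_ID" | jq -r .status.state)
        echo "Job $JOB_ID state: $STATE"
        [ "$STATE" != "RUNNING" ] && [ "$STATE" != "PREP" ] && break
        sleep 10
    done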
``` ```powershell
hdinsight Apache Hadoop Use Sqoop Mac Linux https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/hadoop/apache-hadoop-use-sqoop-mac-linux.md
Learn how to use Apache Sqoop to import and export between an Apache Hadoop clus
1. For ease of use, set variables. Replace `PASSWORD`, `MYSQLSERVER`, and `MYDATABASE` with the relevant values, and then enter the commands below: ```bash
- export password='PASSWORD'
- export sqlserver="MYSQLSERVER"
- export database="MYDATABASE"
+ export PASSWORD='PASSWORD'
+ export SQL_SERVER="MYSQLSERVER"
+ export DATABASE="MYDATABASE"
- export serverConnect="jdbc:sqlserver://$sqlserver.database.windows.net:1433;user=sqluser;password=$password"
- export serverDbConnect="jdbc:sqlserver://$sqlserver.database.windows.net:1433;user=sqluser;password=$password;database=$database"
+ export SERVER_CONNECT="jdbc:sqlserver://$SQL_SERVER.database.windows.net:1433;user=sqluser;password=$PASSWORD"
+    export SERVER_DB_CONNECT="jdbc:sqlserver://$SQL_SERVER.database.windows.net:1433;user=sqluser;password=$PASSWORD;database=$DATABASE"
``` ## Sqoop export
From Hive to SQL.
1. To verify that Sqoop can see your database, enter the command below in your open SSH connection. This command returns a list of databases. ```bash
- sqoop list-databases --connect $serverConnect
+ sqoop list-databases --connect $SERVER_CONNECT
``` 1. Enter the following command to see a list of tables for the specified database: ```bash
- sqoop list-tables --connect $serverDbConnect
+ sqoop list-tables --connect $SERVER_DB_CONNECT
``` 1. To export data from the Hive `hivesampletable` table to the `mobiledata` table in your database, enter the command below in your open SSH connection: ```bash
- sqoop export --connect $serverDbConnect \
+ sqoop export --connect $SERVER_DB_CONNECT \
-table mobiledata \ --hcatalog-table hivesampletable ```
From Hive to SQL.
1. To verify that data was exported, use the following queries from your SSH connection to view the exported data: ```bash
- sqoop eval --connect $serverDbConnect \
+ sqoop eval --connect $SERVER_DB_CONNECT \
--query "SELECT COUNT(*) from dbo.mobiledata WITH (NOLOCK)"
- sqoop eval --connect $serverDbConnect \
+ sqoop eval --connect $SERVER_DB_CONNECT \
--query "SELECT TOP(10) * from dbo.mobiledata WITH (NOLOCK)" ```
From SQL to Azure storage.
1. Enter the command below in your open SSH connection to import data from the `mobiledata` table in SQL, to the `wasbs:///tutorials/usesqoop/importeddata` directory on HDInsight. The fields in the data are separated by a tab character, and the lines are terminated by a new-line character. ```bash
- sqoop import --connect $serverDbConnect \
+ sqoop import --connect $SERVER_DB_CONNECT \
--table mobiledata \ --target-dir 'wasb:///tutorials/usesqoop/importeddata' \ --fields-terminated-by '\t' \
From SQL to Azure storage.
1. Alternatively, you can also specify a Hive table: ```bash
- sqoop import --connect $serverDbConnect \
+ sqoop import --connect $SERVER_DB_CONNECT \
--table mobiledata \ --target-dir 'wasb:///tutorials/usesqoop/importeddata2' \ --fields-terminated-by '\t' \
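
    # Not part of the original commands: a minimal sketch to confirm that the first import above wrote
    # data, by listing the target directory on the cluster's default storage and printing a few rows.
    # It assumes an SSH session on the cluster; part-m-00000 is the usual name of Sqoop's first output file.
    hdfs dfs -ls /tutorials/usesqoop/importeddata
    hdfs dfs -cat /tutorials/usesqoop/importeddata/part-m-00000 | head -n 5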
hdinsight Apache Hbase Tutorial Get Started Linux https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/hbase/apache-hbase-tutorial-get-started-linux.md
The HBase REST API is secured via [basic authentication](https://en.wikipedia.or
1. Set environment variable for ease of use. Edit the commands below by replacing `MYPASSWORD` with the cluster login password. Replace `MYCLUSTERNAME` with the name of your HBase cluster. Then enter the commands. ```bash
- export password='MYPASSWORD'
- export clustername=MYCLUSTERNAME
+ export PASSWORD='MYPASSWORD'
+ export CLUSTER_NAME=MYCLUSTERNAME
``` 1. Use the following command to list the existing HBase tables: ```bash
- curl -u admin:$password \
- -G https://$clustername.azurehdinsight.net/hbaserest/
+ curl -u admin:$PASSWORD \
+ -G https://$CLUSTER_NAME.azurehdinsight.net/hbaserest/
``` 1. Use the following command to create a new HBase table with two column families: ```bash
- curl -u admin:$password \
- -X PUT "https://$clustername.azurehdinsight.net/hbaserest/Contacts1/schema" \
+ curl -u admin:$PASSWORD \
+ -X PUT "https://$CLUSTER_NAME.azurehdinsight.net/hbaserest/Contacts1/schema" \
-H "Accept: application/json" \ -H "Content-Type: application/json" \ -d "{\"@name\":\"Contact1\",\"ColumnSchema\":[{\"name\":\"Personal\"},{\"name\":\"Office\"}]}" \
The HBase REST API is secured via [basic authentication](https://en.wikipedia.or
1. Use the following command to insert some data: ```bash
- curl -u admin:$password \
- -X PUT "https://$clustername.azurehdinsight.net/hbaserest/Contacts1/false-row-key" \
+ curl -u admin:$PASSWORD \
+ -X PUT "https://$CLUSTER_NAME.azurehdinsight.net/hbaserest/Contacts1/false-row-key" \
-H "Accept: application/json" \ -H "Content-Type: application/json" \ -d "{\"Row\":[{\"key\":\"MTAwMA==\",\"Cell\": [{\"column\":\"UGVyc29uYWw6TmFtZQ==\", \"$\":\"Sm9obiBEb2xl\"}]}]}" \
The HBase REST API is secured via [basic authentication](https://en.wikipedia.or
1. Use the following command to get a row: ```bash
- curl -u admin:$password \
- GET "https://$clustername.azurehdinsight.net/hbaserest/Contacts1/1000" \
+ curl -u admin:$PASSWORD \
+ GET "https://$CLUSTER_NAME.azurehdinsight.net/hbaserest/Contacts1/1000" \
-H "Accept: application/json" \ -v ```
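The row keys, column names, and cell values in the REST calls above are base64-encoded. As an aside that isn't part of the original walkthrough, the following sketch shows how those strings map to the plain-text values used in this example, and how to remove the test table when you're done. It assumes the `PASSWORD` and `CLUSTER_NAME` variables set earlier, and that deleting a table through the REST schema endpoint is permitted on your cluster.

```bash
# Encode/decode the base64 strings used in the calls above.
echo -n "1000" | base64                  # MTAwMA==             (row key)
echo -n "Personal:Name" | base64         # UGVyc29uYWw6TmFtZQ==  (column family:qualifier)
echo "Sm9obiBEb2xl" | base64 --decode    # John Dole             (cell value)

# Optional cleanup: delete the Contacts1 test table by deleting its schema.
curl -u admin:$PASSWORD \
    -X DELETE "https://$CLUSTER_NAME.azurehdinsight.net/hbaserest/Contacts1/schema" \
    -v
```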
hdinsight Hdinsight Authorize Users To Ambari https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/hdinsight-authorize-users-to-ambari.md
Write-Output $zookeeperHosts
Edit the variables below by replacing `CLUSTERNAME`, `ADMINPASSWORD`, `NEWUSER`, and `USERPASSWORD` with the appropriate values. The script is designed to be executed with bash. Slight modifications would be needed for a Windows command prompt. ```bash
-export clusterName="CLUSTERNAME"
-export adminPassword='ADMINPASSWORD'
-export user="NEWUSER"
-export userPassword='USERPASSWORD'
+export CLUSTER_NAME="CLUSTERNAME"
+export ADMIN_PASSWORD='ADMINPASSWORD'
+export USER="NEWUSER"
+export USER_PASSWORD='USERPASSWORD'
# create user
-curl -k -u admin:$adminPassword -H "X-Requested-By: ambari" -X POST \
--d "{\"Users/user_name\":\"$user\",\"Users/password\":\"$userPassword\",\"Users/active\":\"true\",\"Users/admin\":\"false\"}" \
-https://$clusterName.azurehdinsight.net/api/v1/users
+curl -k -u admin:$ADMIN_PASSWORD -H "X-Requested-By: ambari" -X POST \
+-d "{\"Users/user_name\":\"$USER\",\"Users/password\":\"$USER_PASSWORD\",\"Users/active\":\"true\",\"Users/admin\":\"false\"}" \
+https://$CLUSTER_NAME.azurehdinsight.net/api/v1/users
-echo "user created: $user"
+echo "user created: $USER"
# grant permissions
-curl -k -u admin:$adminPassword -H "X-Requested-By: ambari" -X POST \
--d '[{"PrivilegeInfo":{"permission_name":"CLUSTER.USER","principal_name":"'$user'","principal_type":"USER"}}]' \
-https://$clusterName.azurehdinsight.net/api/v1/clusters/$clusterName/privileges
+curl -k -u admin:$ADMIN_PASSWORD -H "X-Requested-By: ambari" -X POST \
+-d '[{"PrivilegeInfo":{"permission_name":"CLUSTER.USER","principal_name":"'$USER'","principal_type":"USER"}}]' \
+https://$CLUSTER_NAME.azurehdinsight.net/api/v1/clusters/$CLUSTER_NAME/privileges
echo "Privilege is granted"
echo "Pausing for 100 seconds"
sleep 10s # perform query using new user account
-curl -k -u $user:$userPassword -H "X-Requested-By: ambari" \
--X GET "https://$clusterName.azurehdinsight.net/api/v1/clusters/$clusterName/services/ZOOKEEPER/components/ZOOKEEPER_SERVER"
+curl -k -u $USER:$USER_PASSWORD -H "X-Requested-By: ambari" \
+-X GET "https://$CLUSTER_NAME.azurehdinsight.net/api/v1/clusters/$CLUSTER_NAME/services/ZOOKEEPER/components/ZOOKEEPER_SERVER"
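
# Not part of the original script: a minimal cleanup sketch that removes the test user again through
# the same Ambari REST API once you've verified access (assumes the variables defined above).
curl -k -u admin:$ADMIN_PASSWORD -H "X-Requested-By: ambari" -X DELETE \
"https://$CLUSTER_NAME.azurehdinsight.net/api/v1/users/$USER"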
``` ## Grant permissions to Apache Hive views
hdinsight Hdinsight Hadoop Oms Log Analytics Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/hdinsight-hadoop-oms-log-analytics-tutorial.md
description: Learn how to use Azure Monitor logs to monitor jobs running in an H
Previously updated : 09/02/2022 Last updated : 04/14/2023 # Use Azure Monitor logs to monitor HDInsight clusters
Available HDInsight workbooks:
Screenshot of Spark Workbook :::image type="content" source="./media/hdinsight-hadoop-oms-log-analytics-tutorial/hdinsight-spark-workbook.png" alt-text="Spark workbook screenshot":::
-## Use at-scale Insights to monitor multiple clusters
-
-You can log into Azure portal and select Monitoring. In the **Insights** section, you can select **Insights Hub**. Then you can find HDInsight clusters.
-
-In this view, you can monitor multiple HDInsight clusters in one place.
- :::image type="content" source="./media/hdinsight-hadoop-oms-log-analytics-tutorial/hdinsight-monitor-insights.png" alt-text="Cluster monitor insights screenshot":::
-
-You can select the subscription and the HDInsight clusters you want to monitor.
-
-You can see the detail cluster list in each section.
-
-In the **Overview** tab under **Monitored Clusters**, you can see cluster type, critical Alerts, and resource utilizations.
- :::image type="content" source="./media/hdinsight-hadoop-oms-log-analytics-tutorial/hdinsight-cluster-alerts.png" alt-text="Cluster monitor alerts screenshot":::
-
-Also you can see the clusters in each workload type, including Spark, HBase, Hive, and Kafka.
-
-The high-level metrics of each workload type will be presented, including how many active node managers, how many running applications, etc.
- ## Configuring performance counters
HDInsight support cluster auditing with Azure Monitor logs, by importing the fol
* `log_ambari_audit_CL` - this table provides audit logs from Ambari. * `log_ranger_audit_CL` - this table provides audit logs from Apache Ranger on ESP clusters. - #### [Classic Azure Monitor experience](#tab/previous) ## Prerequisites
For management solution instructions, see [Management solutions in Azure](/previ
Because the cluster is a brand new cluster, the report doesn't show any activities.
-## Configuring performance counters
-
-Azure Monitor supports collecting and analyzing performance metrics for the nodes in your cluster. For more information, see [Linux performance data sources in Azure Monitor](../azure-monitor/agents/data-sources-performance-counters.md#linux-performance-counters).
- ## Cluster auditing HDInsight supports cluster auditing with Azure Monitor logs by importing the following types of logs:
hdinsight Hdinsight Sales Insights Etl https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/hdinsight-sales-insights-etl.md
If you don't have an Azure subscription, create a [free account](https://azure.m
1. Set variable for resource group. Replace `RESOURCE_GROUP_NAME` with the name of an existing or new resource group, then enter the command: ```bash
- resourceGroup="RESOURCE_GROUP_NAME"
+ RESOURCE_GROUP="RESOURCE_GROUP_NAME"
``` 1. Execute the script. Replace `LOCATION` with a desired value, then enter the command: ```bash
- ./scripts/resources.sh $resourceGroup LOCATION
+ ./scripts/resources.sh $RESOURCE_GROUP LOCATION
``` If you're not sure which region to specify, you can retrieve a list of supported regions for your subscription with the [az account list-locations](/cli/azure/account#az-account-list-locations) command.
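For example, a minimal sketch of that lookup (the region names returned depend on your subscription):

```azurecli
# List the region names your subscription can deploy to
az account list-locations --query "[].name" --output tsv
```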
The default password for SSH access to the clusters is `Thisisapassword1`. If yo
1. To view the names of the clusters, enter the following command: ```bash
- sparkClusterName=$(cat resourcesoutputs_remainder.json | jq -r '.properties.outputs.sparkClusterName.value')
- llapClusterName=$(cat resourcesoutputs_remainder.json | jq -r '.properties.outputs.llapClusterName.value')
+ SPARK_CLUSTER_NAME=$(cat resourcesoutputs_remainder.json | jq -r '.properties.outputs.sparkClusterName.value')
+ LLAP_CLUSTER_NAME=$(cat resourcesoutputs_remainder.json | jq -r '.properties.outputs.llapClusterName.value')
- echo "Spark Cluster" $sparkClusterName
- echo "LLAP cluster" $llapClusterName
+ echo "Spark Cluster" $SPARK_CLUSTER_NAME
+ echo "LLAP cluster" $LLAP_CLUSTER_NAME
``` 1. To view the Azure storage account and access key, enter the following command: ```azurecli
- blobStorageName=$(cat resourcesoutputs_storage.json | jq -r '.properties.outputs.blobStorageName.value')
+ BLOB_STORAGE_NAME=$(cat resourcesoutputs_storage.json | jq -r '.properties.outputs.blobStorageName.value')
    BLOB_KEY=$(az storage account keys list \
- --account-name $blobStorageName \
- --resource-group $resourceGroup \
+ --account-name $BLOB_STORAGE_NAME \
+ --resource-group $RESOURCE_GROUP \
--query [0].value -o tsv)
- echo $blobStorageName
- echo $blobKey
+ echo $BLOB_STORAGE_NAME
+ echo $BLOB_KEY
``` 1. To view the Data Lake Storage Gen2 account and access key, enter the following command: ```azurecli
- ADLSGen2StorageName=$(cat resourcesoutputs_storage.json | jq -r '.properties.outputs.adlsGen2StorageName.value')
+ ADLSGEN2STORAGENAME=$(cat resourcesoutputs_storage.json | jq -r '.properties.outputs.adlsGen2StorageName.value')
- adlsKey=$(az storage account keys list \
- --account-name $ADLSGen2StorageName \
- --resource-group $resourceGroup \
+ ADLSKEY=$(az storage account keys list \
+ --account-name $ADLSGEN2STORAGENAME \
+ --resource-group $RESOURCE_GROUP \
--query [0].value -o tsv)
- echo $ADLSGen2StorageName
- echo $adlsKey
+ echo $ADLSGEN2STORAGENAME
+ echo $ADLSKEY
``` ### Create a data factory
This data factory will have one pipeline with two activities:
To set up your Azure Data Factory pipeline, execute the command below. You should still be at the `hdinsight-sales-insights-etl` directory. ```bash
-blobStorageName=$(cat resourcesoutputs_storage.json | jq -r '.properties.outputs.blobStorageName.value')
-ADLSGen2StorageName=$(cat resourcesoutputs_storage.json | jq -r '.properties.outputs.adlsGen2StorageName.value')
+BLOB_STORAGE_NAME=$(cat resourcesoutputs_storage.json | jq -r '.properties.outputs.blobStorageName.value')
+ADLSGEN2STORAGENAME=$(cat resourcesoutputs_storage.json | jq -r '.properties.outputs.adlsGen2StorageName.value')
-./scripts/adf.sh $resourceGroup $ADLSGen2StorageName $blobStorageName
+./scripts/adf.sh $RESOURCE_GROUP $ADLSGEN2STORAGENAME $BLOB_STORAGE_NAME
``` This script does the following things:
For other ways to transform data by using HDInsight, see [this article on using
1. Copy the `query.hql` file to the LLAP cluster by using SCP. Enter the command: ```bash
- llapClusterName=$(cat resourcesoutputs_remainder.json | jq -r '.properties.outputs.llapClusterName.value')
- scp scripts/query.hql sshuser@$llapClusterName-ssh.azurehdinsight.net:/home/sshuser/
+ LLAP_CLUSTER_NAME=$(cat resourcesoutputs_remainder.json | jq -r '.properties.outputs.llapClusterName.value')
+ scp scripts/query.hql sshuser@$LLAP_CLUSTER_NAME-ssh.azurehdinsight.net:/home/sshuser/
``` Reminder: The default password is `Thisisapassword1`.
For other ways to transform data by using HDInsight, see [this article on using
1. Use SSH to access the LLAP cluster. Enter the command: ```bash
- ssh sshuser@$llapClusterName-ssh.azurehdinsight.net
+ ssh sshuser@$LLAP_CLUSTER_NAME-ssh.azurehdinsight.net
``` 1. Use the following command to run the script:
If you're not going to continue to use this application, delete all resources by
1. To remove the resource group, enter the command: ```azurecli
- az group delete -n $resourceGroup
+ az group delete -n $RESOURCE_GROUP
``` 1. To remove the service principal, enter the commands: ```azurecli
- servicePrincipal=$(cat serviceprincipal.json | jq -r '.name')
- az ad sp delete --id $servicePrincipal
+ SERVICE_PRINCIPAL=$(cat serviceprincipal.json | jq -r '.name')
+ az ad sp delete --id $SERVICE_PRINCIPAL
``` ## Next steps
hdinsight Apache Kafka Get Started https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/kafka/apache-kafka-get-started.md
In this section, you get the host information from the Apache Ambari REST API on
1. Set up password variable. Replace `PASSWORD` with the cluster login password, then enter the command: ```bash
- export password='PASSWORD'
+ export PASSWORD='PASSWORD'
``` 1. Extract the correctly cased cluster name. The actual casing of the cluster name may be different than you expect, depending on how the cluster was created. This command will obtain the actual casing, and then store it in a variable. Enter the following command: ```bash
- export clusterName=$(curl -u admin:$password -sS -G "http://headnodehost:8080/api/v1/clusters" | jq -r '.items[].Clusters.cluster_name')
+ export CLUSTER_NAME=$(curl -u admin:$PASSWORD -sS -G "http://headnodehost:8080/api/v1/clusters" | jq -r '.items[].Clusters.cluster_name')
``` > [!Note]
In this section, you get the host information from the Apache Ambari REST API on
1. To set an environment variable with Zookeeper host information, use the command below. The command retrieves all Zookeeper hosts, then returns only the first two entries. This is because you want some redundancy in case one host is unreachable. ```bash
- export KAFKAZKHOSTS=$(curl -sS -u admin:$password -G https://$clusterName.azurehdinsight.net/api/v1/clusters/$clusterName/services/ZOOKEEPER/components/ZOOKEEPER_SERVER | jq -r '["\(.host_components[].HostRoles.host_name):2181"] | join(",")' | cut -d',' -f1,2);
+ export KAFKAZKHOSTS=$(curl -sS -u admin:$PASSWORD -G https://$CLUSTER_NAME.azurehdinsight.net/api/v1/clusters/$CLUSTER_NAME/services/ZOOKEEPER/components/ZOOKEEPER_SERVER | jq -r '["\(.host_components[].HostRoles.host_name):2181"] | join(",")' | cut -d',' -f1,2);
``` > [!Note]
In this section, you get the host information from the Apache Ambari REST API on
1. To set an environment variable with Apache Kafka broker host information, use the following command: ```bash
- export KAFKABROKERS=$(curl -sS -u admin:$password -G https://$clusterName.azurehdinsight.net/api/v1/clusters/$clusterName/services/KAFKA/components/KAFKA_BROKER | jq -r '["\(.host_components[].HostRoles.host_name):9092"] | join(",")' | cut -d',' -f1,2);
+ export KAFKABROKERS=$(curl -sS -u admin:$PASSWORD -G https://$CLUSTER_NAME.azurehdinsight.net/api/v1/clusters/$CLUSTER_NAME/services/KAFKA/components/KAFKA_BROKER | jq -r '["\(.host_components[].HostRoles.host_name):9092"] | join(",")' | cut -d',' -f1,2);
``` > [!Note]
hdinsight Apache Kafka Producer Consumer Api https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/kafka/apache-kafka-producer-consumer-api.md
If you would like to skip this step, prebuilt jars can be downloaded from the `P
```bash sudo apt -y install jq
- export clusterName='<clustername>'
- export password='<password>'
- export KAFKABROKERS=$(curl -sS -u admin:$password -G https://$clusterName.azurehdinsight.net/api/v1/clusters/$clusterName/services/KAFKA/components/KAFKA_BROKER | jq -r '["\(.host_components[].HostRoles.host_name):9092"] | join(",")' | cut -d',' -f1,2);
+ export CLUSTER_NAME='<clustername>'
+ export PASSWORD='<password>'
+ export KAFKABROKERS=$(curl -sS -u admin:$PASSWORD -G https://$CLUSTER_NAME.azurehdinsight.net/api/v1/clusters/$CLUSTER_NAME/services/KAFKA/components/KAFKA_BROKER | jq -r '["\(.host_components[].HostRoles.host_name):9092"] | join(",")' | cut -d',' -f1,2);
``` > [!Note]
healthcare-apis Device Messages Through Iot Hub https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/iot/device-messages-through-iot-hub.md
Previously updated : 04/07/2023 Last updated : 04/14/2023
For enhanced workflows and ease of use, you can use the MedTech service to recei
:::image type="content" source="media\device-messages-through-iot-hub\data-flow-diagram.png" border="false" alt-text="Diagram of the IoT device message flow through an IoT hub and event hub, and then into the MedTech service." lightbox="media\device-messages-through-iot-hub\data-flow-diagram.png"::: > [!TIP]
-> To learn how the MedTech service transforms and persists device message data into the FHIR service as FHIR Observations, see [Overview of the MedTech service device message processing stages](overview-of-device-message-processing-stages.md).
+> To learn how the MedTech service transforms and persists device message data into the FHIR service as FHIR Observations, see [Overview of the MedTech service device data processing stages](overview-of-device-data-processing-stages.md).
In this tutorial, you learn how to:
healthcare-apis Frequently Asked Questions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/iot/frequently-asked-questions.md
Previously updated : 04/04/2023 Last updated : 04/14/2023
The MedTech service is available in these Azure regions: [Products available by
### Can I use the MedTech service with a different FHIR service other than the Azure Health Data Services FHIR service?
-No. The MedTech service currently only supports the Azure Health Data Services FHIR service for the persistence of transformed device message data. The open-source version of the MedTech service supports the use of different FHIR services.
+No. The MedTech service currently only supports the Azure Health Data Services FHIR service for the persistence of transformed device data. The open-source version of the MedTech service supports the use of different FHIR services.
To learn more about the MedTech service open-source projects, see [Open-source projects](git-projects.md).
The MedTech service supports the [HL7 FHIR&#174; R4](https://www.hl7.org/impleme
### Why do I have to provide device and FHIR destination mappings to the MedTech service?
-The MedTech service requires device and FHIR destination mappings to perform normalization and transformation processes on device message data. To learn how the MedTech service transforms device message data into [FHIR Observations](https://www.hl7.org/fhir/observation.html), see [Overview of the MedTech service device message processing stages](overview-of-device-message-processing-stages.md).
+The MedTech service requires device and FHIR destination mappings to perform normalization and transformation processes on device message data. To learn how the MedTech service transforms device data into [FHIR Observations](https://www.hl7.org/fhir/observation.html), see [Overview of the MedTech service device data processing stages](overview-of-device-data-processing-stages.md).
### Is JsonPathContent still supported by the MedTech service device mapping? Yes. JsonPathContent can be used as a template type within [CollectionContent](overview-of-device-mapping.md#collectioncontent). It's recommended that [CalculatedContent](how-to-use-calculatedcontent-mappings.md) is used as it supports all of the features of JsonPathContent with extra support for more advanced features.
-### How long does it take for device message data to show up in the FHIR service?
+### How long does it take for device data to show up in the FHIR service?
-The MedTech service buffers [FHIR Observations](https://www.hl7.org/fhir/observation.html) created during the transformation stage and provides near real-time processing. However, this buffer can potentially delay the persistence of FHIR Observations to the FHIR service up to ~five minutes. To learn how the MedTech service transforms device message data into FHIR Observations, see [Overview of the MedTech service device message processing stages](overview-of-device-message-processing-stages.md).
+The MedTech service buffers [FHIR Observations](https://www.hl7.org/fhir/observation.html) created during the transformation stage and provides near real-time processing. However, this buffer can potentially delay the persistence of FHIR Observations to the FHIR service up to ~five minutes. To learn how the MedTech service transforms device data into FHIR Observations, see [Overview of the MedTech service device data processing stages](overview-of-device-data-processing-stages.md).
### Why are the device messages added to the event hub not showing up as FHIR Observations in the FHIR service?
healthcare-apis Get Started https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/iot/get-started.md
For more information about assigning roles to the FHIR services, see [Configure
For more information about application roles, see [Authentication and Authorization for Azure Health Data Services](../authentication-authorization.md).
-## Step 5: Send the data for processing
+## Step 5: Send the device data for processing
-When the MedTech service is deployed and connected to the Event Hubs and FHIR services, it's ready to process data from a device and translate it into a FHIR service Observation. There are three parts of the sending process.
+When the MedTech service is deployed and connected to the Event Hubs and FHIR services, it's ready to process device data and transform it into FHIR Observations. There are three parts of the sending process.
-### Data sent from Device to Event Hubs
+### Device data sent to Event Hubs
-The data is sent to an Event Hubs instance so that it can wait until the MedTech service is ready to receive it. The data transfer needs to be asynchronous because it's sent over the Internet and delivery times can't be precisely measured. Normally the data won't sit on an event hub longer than 24 hours.
+The device data is sent to an Event Hubs instance so that it can wait until the MedTech service is ready to receive it. The device data transfer needs to be asynchronous because it's sent over the Internet and delivery times can't be precisely measured. Normally the data won't sit on an event hub longer than 24 hours.
For more information about Event Hubs, see [Event Hubs](../../event-hubs/event-hubs-about.md). For more information on Event Hubs data retention, see [Event Hubs quotas](../../event-hubs/event-hubs-quotas.md)
-### Data Sent from Event Hubs to the MedTech service
+### Device data sent from Event Hubs to the MedTech service
-MedTech requests the data from the Event Hubs instance and the data is sent from the event hub to the MedTech service. This procedure is called ingestion.
+MedTech requests the device data from the Event Hubs instance and the device data is sent from the event hub to the MedTech service. This procedure is called ingestion.
-### The MedTech service processes the data
+### The MedTech service processes the device data
-The MedTech service processes the data in five steps:
+The MedTech service processes the device data in five steps:
- Ingest - Normalize
The MedTech service processes the data in five steps:
If the processing was successful and you didn't get any error messages, your device data is now a FHIR service [Observation](http://hl7.org/fhir/observation.html) resource.
-For more information on the MedTech service device message data transformation, see [Overview of the MedTech service device message processing stages](overview-of-device-message-processing-stages.md).
+For more information on the MedTech service device data transformation, see [Overview of the MedTech service device data processing stages](overview-of-device-data-processing-stages.md).
-## Step 6: Verify the processed data
+## Step 6: Verify the processed device data
-You can verify that the data was processed correctly by checking to see if there's now a new Observation resource in the FHIR service. If the data isn't mapped or if the mapping isn't authored properly, the data will be skipped. If there are any problems, check the [device mapping](overview-of-device-mapping.md) or the [FHIR destination mapping](how-to-configure-fhir-mappings.md).
+You can verify that the device data was processed correctly by checking to see if there's now a new Observation resource in the FHIR service. If the device data isn't mapped or if the mapping isn't authored properly, the device data will be skipped. If there are any problems, check the [device mapping](overview-of-device-mapping.md) or the [FHIR destination mapping](how-to-configure-fhir-mappings.md).
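As an illustrative sketch only (the FHIR service URL below is a placeholder, not taken from this article), you could also confirm that new Observation resources exist by querying the FHIR service directly:

```bash
# Acquire a token for the FHIR service and list the most recently updated Observations
TOKEN=$(az account get-access-token --resource=https://<your-fhir-service>.fhir.azurehealthcareapis.com --query accessToken --output tsv)
curl -s -H "Authorization: Bearer $TOKEN" \
  "https://<your-fhir-service>.fhir.azurehealthcareapis.com/Observation?_sort=-_lastUpdated&_count=5"
```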
### Metrics
-You can verify that the data is correctly persisted into the FHIR service by using the [MedTech service metrics](how-to-configure-metrics.md) in the Azure portal.
+You can verify that the device data is correctly persisted in the FHIR service by using the [MedTech service metrics](how-to-configure-metrics.md) in the Azure portal.
## Next steps
healthcare-apis How To Configure Fhir Mappings https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/iot/how-to-configure-fhir-mappings.md
- Title: How to configure FHIR destination mappings in the MedTech service - Azure Health Data Services
-description: This article describes how to configure FHIR destination mappings in Azure Health Data Services MedTech service.
---- Previously updated : 1/12/2023---
-# How to configure FHIR destination mappings
-
-This article describes how to configure the MedTech service using the Fast Healthcare Interoperability Resources (FHIR&#174;) destination mappings.
-
-Below is a conceptual example of what happens during the normalization and transformation process within the MedTech service:
--
-## FHIR destination mappings
-
-Once the device content is extracted into a normalized model, the data is collected and grouped according to device identifier, measurement type, and time period. The output of this grouping is sent for conversion into a FHIR resource ([Observation](https://www.hl7.org/fhir/observation.html) currently). The FHIR destination mapping template controls how the data is mapped into a FHIR observation. Should an observation be created for a point in time or over a period of an hour? What codes should be added to the observation? Should the value be represented as [SampledData](https://www.hl7.org/fhir/datatypes.html#SampledData) or a [Quantity](https://www.hl7.org/fhir/datatypes.html#Quantity)? These data types are all options the FHIR destination mappings
-configuration controls.
-
-> [!NOTE]
-> Mappings are stored in an underlying blob storage and loaded from blob per compute execution. Once updated they should take effect immediately.
-
-## FHIR destination mappings validations
-
-The validation process validates the FHIR destination mappings before allowing them to be saved for use. These elements are required in the FHIR destination mappings templates.
-
-**FHIR destination mappings**
-
-|Element|Required|
-|:|:-|
-|TypeName|True|
-
-> [!NOTE]
-> This is the only required FHIR destination mapping element validated at this time.
-
-### CodeValueFhirTemplate
-
-The CodeValueFhirTemplate is currently the only template supported in the FHIR destination mapping. It allows you to define codes, the effective period, and the value of the observation. Multiple value types are supported: [SampledData](https://www.hl7.org/fhir/datatypes.html#SampledData), [CodeableConcept](https://www.hl7.org/fhir/datatypes.html#CodeableConcept), and [Quantity](https://www.hl7.org/fhir/datatypes.html#Quantity). Along with these configurable values, the identifier for the Observation resource and linking to the proper Device and Patient resources are handled automatically.
-
-| Property | Description
-| |
-|**TypeName**| The type of measurement this template should bind to. There should be at least one Device mapping template that outputs this type.
-|**PeriodInterval**|The period of time the observation created should represent. Supported values are 0 (an instance), 60 (an hour), 1440 (a day).
-|**Category**|Any number of [CodeableConcepts](http://hl7.org/fhir/datatypes-definitions.html#codeableconcept) to classify the type of observation created.
-|**Codes**|One or more [Codings](http://hl7.org/fhir/datatypes-definitions.html#coding) to apply to the observation created.
-|**Codes[].Code**|The code for the [Coding](http://hl7.org/fhir/datatypes-definitions.html#coding).
-|**Codes[].System**|The system for the [Coding](http://hl7.org/fhir/datatypes-definitions.html#coding).
-|**Codes[].Display**|The display for the [Coding](http://hl7.org/fhir/datatypes-definitions.html#coding).
-|**Value**|The value to extract and represent in the observation. For more information, see [Value Type Templates](#value-type-templates).
-|**Components**|*Optional:* One or more components to create on the observation.
-|**Components[].Codes**|One or more [Codings](http://hl7.org/fhir/datatypes-definitions.html#coding) to apply to the component.
-|**Components[].Value**|The value to extract and represent in the component. For more information, see [Value Type Templates](#value-type-templates).
-
-### Value type templates
-
-Below are the currently supported value type templates:
-
-#### SampledData
-
-Represents the [SampledData](http://hl7.org/fhir/datatypes.html#SampledData) FHIR data type. Observation measurements are written to a value stream starting at a point in time and incrementing forward using the period defined. If no value is present, an `E` is written into the data stream. If the period is such that two or more values occupy the same position in the data stream, the latest value is used. The same logic is applied when an observation using the SampledData is updated.
-
-| Property | Description
-| |
-|**DefaultPeriod**|The default period in milliseconds to use.
-|**Unit**|The unit to set on the origin of the SampledData.
-
-#### Quantity
-
-Represents the [Quantity](http://hl7.org/fhir/datatypes.html#Quantity) FHIR data type. If more than one value is present in the grouping, only the first value is used. When a new value arrives that maps to the same observation, it overwrites the old value.
-
-| Property | Description
-| |
-|**Unit**| Unit representation.
-|**Code**| Coded form of the unit.
-|**System**| System that defines the coded unit form.
-
-### CodeableConcept
-
-Represents the [CodeableConcept](http://hl7.org/fhir/datatypes.html#CodeableConcept) FHIR data type. The actual value isn't used.
-
-| Property | Description
-| |
-|**Text**|Plain text representation.
-|**Codes**|One or more [Codings](http://hl7.org/fhir/datatypes-definitions.html#coding) to apply to the observation created.
-|**Codes[].Code**|The code for the [Coding](http://hl7.org/fhir/datatypes-definitions.html#coding).
-|**Codes[].System**|The system for the [Coding](http://hl7.org/fhir/datatypes-definitions.html#coding).
-|**Codes[].Display**|The display for the [Coding](http://hl7.org/fhir/datatypes-definitions.html#coding).
-
-### Examples
-
-**Heart rate - SampledData**
-
-```json
-{
- "templateType": "CodeValueFhir",
- "template": {
- "codes": [
- {
- "code": "8867-4",
- "system": "http://loinc.org",
- "display": "Heart rate"
- }
- ],
- "periodInterval": 60,
- "typeName": "heartrate",
- "value": {
- "defaultPeriod": 5000,
- "unit": "count/min",
- "valueName": "hr",
- "valueType": "SampledData"
- }
- }
-}
-```
-
-**Steps - SampledData**
-
-```json
-{
- "templateType": "CodeValueFhir",
- "template": {
- "codes": [
- {
- "code": "55423-8",
- "system": "http://loinc.org",
- "display": "Number of steps"
- }
- ],
- "periodInterval": 60,
- "typeName": "stepsCount",
- "value": {
- "defaultPeriod": 5000,
- "unit": "",
- "valueName": "steps",
- "valueType": "SampledData"
- }
- }
-}
-```
-
-**Blood pressure - SampledData**
-
-```json
-{
- "templateType": "CodeValueFhir",
- "template": {
- "codes": [
- {
- "code": "85354-9",
- "display": "Blood pressure panel with all children optional",
- "system": "http://loinc.org"
- }
- ],
- "periodInterval": 60,
- "typeName": "bloodpressure",
- "components": [
- {
- "codes": [
- {
- "code": "8867-4",
- "display": "Diastolic blood pressure",
- "system": "http://loinc.org"
- }
- ],
- "value": {
- "defaultPeriod": 5000,
- "unit": "mmHg",
- "valueName": "diastolic",
- "valueType": "SampledData"
- }
- },
- {
- "codes": [
- {
- "code": "8480-6",
- "display": "Systolic blood pressure",
- "system": "http://loinc.org"
- }
- ],
- "value": {
- "defaultPeriod": 5000,
- "unit": "mmHg",
- "valueName": "systolic",
- "valueType": "SampledData"
- }
- }
- ]
- }
-}
-```
-
-**Blood pressure - Quantity**
-
-```json
-{
- "templateType": "CodeValueFhir",
- "template": {
- "codes": [
- {
- "code": "85354-9",
- "display": "Blood pressure panel with all children optional",
- "system": "http://loinc.org"
- }
- ],
- "periodInterval": 0,
- "typeName": "bloodpressure",
- "components": [
- {
- "codes": [
- {
- "code": "8867-4",
- "display": "Diastolic blood pressure",
- "system": "http://loinc.org"
- }
- ],
- "value": {
- "unit": "mmHg",
- "valueName": "diastolic",
- "valueType": "Quantity"
- }
- },
- {
- "codes": [
- {
- "code": "8480-6",
- "display": "Systolic blood pressure",
- "system": "http://loinc.org"
- }
- ],
- "value": {
- "unit": "mmHg",
- "valueName": "systolic",
- "valueType": "Quantity"
- }
- }
- ]
- }
-}
-```
-
-**Device removed - CodeableConcept**
-
-```json
-{
- "templateType": "CodeValueFhir",
- "template": {
- "codes": [
- {
- "code": "deviceEvent",
- "system": "https://www.mydevice.com/v1",
- "display": "Device Event"
- }
- ],
- "periodInterval": 0,
- "typeName": "deviceRemoved",
- "value": {
- "text": "Device Removed",
- "codes": [
- {
- "code": "deviceRemoved",
- "system": "https://www.mydevice.com/v1",
- "display": "Device Removed"
- }
- ],
- "valueName": "deviceRemoved",
- "valueType": "CodeableConcept"
- }
- }
-}
-```
-
-> [!TIP]
-> See the MedTech service article [Troubleshoot MedTech service errors](troubleshoot-errors.md) for assistance fixing common MedTech service errors.
-
-## Next steps
-
-In this article, you learned how to configure FHIR destination mappings.
-
-To learn about how to configure device mappings, see
-
-> [!div class="nextstepaction"]
-> [How to configure device mappings](how-to-configure-device-mappings.md)
-
-FHIR&#174; is a registered trademark of Health Level Seven International, registered in the U.S. Trademark Office and is used with their permission.
healthcare-apis How To Configure Metrics https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/iot/how-to-configure-metrics.md
Previously updated : 04/04/2023 Last updated : 04/14/2023
Metric category|Metric name|Metric description|
|--|--|--| |Availability|IotConnector Health Status|The overall health of the MedTech service.| |Errors|Total Error Count|The total number of errors.|
-|Latency|Average Group Stage Latency|The average latency of the group stage. The [group stage](overview-of-device-message-processing-stages.md#groupoptional) performs buffering, aggregating, and grouping on normalized messages.|
-|Latency|Average Normalize Stage Latency|The average latency of the normalized stage. The [normalized stage](overview-of-device-message-processing-stages.md#normalize) performs normalization on raw incoming messages.|
-|Traffic|Number of Fhir resources saved|The total number of FHIR resources [updated or persisted](overview-of-device-message-processing-stages.md#persist) by the MedTech service.|
-|Traffic|Number of Incoming Messages|The number of received raw [incoming messages](overview-of-device-message-processing-stages.md#ingest) (for example, the device events) from the configured source event hub.|
-|Traffic|Number of Measurements|The number of normalized value readings received by the FHIR [transformation stage](overview-of-device-message-processing-stages.md#transform) of the MedTech service.|
+|Latency|Average Group Stage Latency|The average latency of the group stage. The [group stage](overview-of-device-data-processing-stages.md#groupoptional) performs buffering, aggregating, and grouping on normalized messages.|
+|Latency|Average Normalize Stage Latency|The average latency of the normalized stage. The [normalized stage](overview-of-device-data-processing-stages.md#normalize) performs normalization on raw incoming messages.|
+|Traffic|Number of Fhir resources saved|The total number of FHIR resources [updated or persisted](overview-of-device-data-processing-stages.md#persist) by the MedTech service.|
+|Traffic|Number of Incoming Messages|The number of received raw [incoming messages](overview-of-device-data-processing-stages.md#ingest) (for example, the device events) from the configured source event hub.|
+|Traffic|Number of Measurements|The number of normalized value readings received by the FHIR [transformation stage](overview-of-device-data-processing-stages.md#transform) of the MedTech service.|
|Traffic|Number of Message Groups|The number of groups that have messages aggregated in the designated time window.| |Traffic|Number of Normalized Messages|The number of normalized messages.|
healthcare-apis How To Use Calculatedcontent Mappings https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/iot/how-to-use-calculatedcontent-mappings.md
Previously updated : 02/09/2023 Last updated : 04/14/2023 # How to use CalculatedContent mappings
+> [!NOTE]
+> [Fast Healthcare Interoperability Resources (FHIR&#174;)](https://www.hl7.org/fhir/) is an open healthcare specification.
+ This article describes how to use CalculatedContent mappings with MedTech service device mappings in Azure Health Data Services. ## Overview of CalculatedContent mappings
healthcare-apis How To Use Custom Functions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/iot/how-to-use-custom-functions.md
Previously updated : 1/18/2023 Last updated : 04/14/2023 # How to use custom functions with device mappings
+> [!NOTE]
+> [Fast Healthcare Interoperability Resources (FHIR&#174;)](https://www.hl7.org/fhir/) is an open healthcare specification.
+ Many functions are available when using **JMESPath** as the expression language. Besides the functions available as part of the JMESPath specification, many more custom functions may also be used. This article describes the MedTech service-specific custom functions for use with the MedTech service [device mapping](how-to-configure-device-mappings.md) during the device message [normalization](understand-service.md#normalize) process. > [!TIP]
healthcare-apis How To Use Iotjsonpathcontent Mappings https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/iot/how-to-use-iotjsonpathcontent-mappings.md
+
+ Title: How to use IotJsonPathContent mappings in the MedTech service device mappings - Azure Health Data Services
+description: This article describes how to use IotJsonPathContent mappings with the MedTech service device mappings.
++++ Last updated : 04/13/2023+++
+# How to use IotJsonPathContent mappings
+
+> [!NOTE]
+> [Fast Healthcare Interoperability Resources (FHIR&#174;)](https://www.hl7.org/fhir/) is an open healthcare specification.
+
+This article describes how to use IotJsonPathContent mappings with the MedTech service [device mappings](overview-of-device-mapping.md).
+
+## IotJsonPathContent
+
+The IotJsonPathContent is similar to the JsonPathContent, except that the `DeviceIdExpression` and `TimestampExpression` aren't required.
+
+The assumption, when using this template, is that the device messages being evaluated were sent using the [Azure IoT Hub Device SDKs](../../iot-hub/iot-hub-devguide-sdks.md#azure-iot-hub-device-sdks) or the [Export Data (legacy)](../../iot-central/core/howto-export-data-legacy.md) feature of [Azure IoT Central](../../iot-central/core/overview-iot-central.md).
+
+When you're using these SDKs, the device identity and the timestamp of the message are known.
+
+> [!IMPORTANT]
+> Make sure that you're using a device identifier from Azure IoT Hub or Azure IoT Central that is registered as an identifier for a Device resource on the destination FHIR service.
+
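As a hand-authored sketch (the identifier system and values below are placeholders, not taken from this article), a Device resource registered in the FHIR service for the IoT hub device `device123` might look like this:

```json
{
  "resourceType": "Device",
  "identifier": [
    {
      "system": "https://contoso.com/devices",
      "value": "device123"
    }
  ],
  "patient": {
    "reference": "Patient/patient1"
  }
}
```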
+If you're using Azure IoT Hub Device SDKs, you can still use the JsonPathContentTemplate, assuming that you're using custom properties in the message body for the device identity or measurement timestamp.
+
+> [!NOTE]
+> When using `IotJsonPathContent`, the `TypeMatchExpression` should resolve to the entire message as a JToken. For more information, see the following examples:
+
+### Examples
+
+With each of these examples, you're provided with:
+ * A valid device message.
+ * An example of what the device message will look like after being received and processed by the IoT hub.
+ * Conforming and valid MedTech service device mappings for normalizing the device message after IoT hub processing.
+ * An example of what the MedTech service device message will look like after normalization.
+
+> [!IMPORTANT]
+> To avoid device spoofing in device-to-cloud messages, Azure IoT Hub enriches all messages with additional properties. To learn more about these properties, see [Anti-spoofing properties](../../iot-hub/iot-hub-devguide-messages-construct.md#anti-spoofing-properties).
+
+> [!TIP]
+> [Visual Studio Code with the Azure IoT Hub extension](https://marketplace.visualstudio.com/items?itemName=vsciot-vscode.azure-iot-toolkit) is a recommended method for sending IoT device messages to your IoT Hub for testing and troubleshooting.
+
+**Heart rate**
+
+**A valid device message to send to your IoT hub.**
+
+```json
+
+{"heartRate" : "78"}
+
+```
+
+**An example of what the device message will look like after being received and processed by the IoT hub.**
+
+> [!NOTE]
+> The IoT hub enriches the device message with all properties starting with `iothub` before sending it to the MedTech service device event hub. For example: `iothub-creation-time-utc`.
+>
+> `patientIdExpression` is only required for MedTech services in the **Create** mode. However, if **Lookup** is being used, a Device resource with a matching device identifier must exist in the FHIR service. These examples assume your MedTech service is in the **Create** mode. For more information on the **Create** and **Lookup** **Destination properties**, see [Configure Destination properties](deploy-05-new-config.md#destination-properties).
+
+```json
+
+{
+ "Body": {
+ "heartRate": "78"
+ },
+ "Properties": {
+ "iothub-creation-time-utc" : "2021-02-01T22:46:01.8750000Z"
+ },
+ "SystemProperties": {
+ "iothub-connection-device-id" : "device123"
+ }
+}
+
+```
+
+**Conforming and valid MedTech service device mappings for normalizing device data after IoT Hub processing.**
+
+```json
+
+{
+ "templateType": "CollectionContent",
+ "template": [
+ {
+ "templateType": "IotJsonPathContentTemplate",
+ "template": {
+ "typeName": "heartRate",
+ "typeMatchExpression": "$..[?(@Body.heartRate)]",
+ "patientIdExpression": "$.SystemProperties.iothub-connection-device-id",
+ "values": [
+ {
+ "required": "true",
+ "valueExpression": "$.Body.heartRate",
+ "valueName": "hr"
+ }
+ ]
+ }
+ }
+ ]
+}
+
+```
+
+**An example of what the MedTech service device data will look like after the normalization process.**
+
+```json
+
+{
+ "type": "heartRate",
+ "occurrenceTimeUtc": "2021-02-01T22:46:01.875Z",
+ "deviceId": "device123",
+ "properties": [
+ {
+ "name": "hr",
+ "value": "78"
+ }
+ ]
+}
+
+```
+
+**Blood pressure**
+
+**A valid IoT device message to send to your IoT hub.**
+
+```json
+
+{
+ "systolic": "123",
+ "diastolic": "87"
+}
+
+```
+
+**An example of what the device message will look like after being received and processed by the IoT hub.**
+
+> [!NOTE]
+> The IoT hub enriches the device message with all properties starting with `iothub` before sending it to the MedTech service device event hub. For example: `iothub-creation-time-utc`.
+>
+> `patientIdExpression` is only required for MedTech services in the **Create** mode. However, if **Lookup** is being used, a Device resource with a matching device identifier must exist in the FHIR service. These examples assume your MedTech service is in the **Create** mode. For more information on the **Create** and **Lookup** **Destination properties**, see [Configure Destination properties](deploy-05-new-config.md#destination-properties).
+
+```json
+
+{
+ "Body": {
+ "systolic": "123",
+ "diastolic" : "87"
+ },
+ "Properties": {
+ "iothub-creation-time-utc" : "2021-02-01T22:46:01.8750000Z"
+ },
+ "SystemProperties": {
+ "iothub-connection-device-id" : "device123"
+ }
+}
+
+```
+
+**Conforming and valid MedTech service device mappings for normalizing the device data after IoT hub processing.**
+
+```json
+
+{
+ "templateType": "CollectionContent",
+ "template": [
+ {
+ "templateType": "IotJsonPathContentTemplate",
+ "template": {
+ "typeName": "bloodpressure",
+ "typeMatchExpression": "$..[?(@Body.systolic && @Body.diastolic)]",
+ "patientIdExpression": "$.SystemProperties.iothub-connection-device-id",
+ "values": [
+ {
+ "required": "true",
+ "valueExpression": "$.Body.systolic",
+ "valueName": "systolic"
+ },
+ {
+ "required": "true",
+ "valueExpression": "$.Body.diastolic",
+ "valueName": "diastolic"
+ }
+ ]
+ }
+ }
+ ]
+}
+
+```
+
+**An example of what the MedTech service device data will look like after the normalization process.**
+
+```json
+
+{
+ "type": "bloodpressure",
+ "occurrenceTimeUtc": "2021-02-01T22:46:01.875Z",
+ "deviceId": "device123",
+ "properties": [
+ {
+ "name": "systolic",
+ "value": "123"
+ },
+ {
+ "name": "diastolic",
+ "value": "87"
+ }
+ ]
+}
+
+```
+
+> [!TIP]
+> The IotJsonPathContent device mapping examples provided in this article may be combined into a single MedTech service device mapping, as shown below.
+>
+> Additionally, the IotJsonPathContent can be combined with other template types, such as [JsonPathContent mappings](how-to-use-jsonpath-content-mappings.md), to further expand your MedTech service device mapping.
+
+**Combined heart rate and blood pressure MedTech service device mapping example.**
+
+```json
+
+{
+ "templateType": "CollectionContent",
+ "template": [
+ {
+ "templateType": "IotJsonPathContent",
+ "template": {
+ "typeName": "heartRate",
+ "typeMatchExpression": "$..[?(@Body.heartRate)]",
+ "patientIdExpression": "$.SystemProperties.iothub-connection-device-id",
+ "values": [
+ {
+ "required": "true",
+ "valueExpression": "$.Body.heartRate",
+ "valueName": "hr"
+ }
+ ]
+ }
+ },
+ {
+ "templateType": "IotJsonPathContent",
+ "template": {
+ "typeName": "bloodpressure",
+ "typeMatchExpression": "$..[?(@Body.systolic && @Body.diastolic)]",
+ "patientIdExpression": "$.SystemProperties.iothub-connection-device-id",
+ "values": [
+ {
+ "required": "true",
+ "valueExpression": "$.Body.systolic",
+ "valueName": "systolic"
+ },
+ {
+ "required": "true",
+ "valueExpression": "$.Body.diastolic",
+ "valueName": "diastolic"
+ }
+ ]
+ }
+ }
+ ]
+}
+
+```
+
+> [!TIP]
+> See the MedTech service article [Troubleshoot MedTech service errors](troubleshoot-errors.md) for assistance fixing MedTech service errors.
+
+## Next steps
+
+In this article, you learned how to use IotJsonPathContent mappings with the MedTech service device mapping.
+
+To learn how to configure the MedTech service FHIR destination mapping, see
+
+> [!div class="nextstepaction"]
+> [How to configure FHIR destination mappings](how-to-configure-fhir-mappings.md)
+
+FHIR&#174; is a registered trademark of Health Level Seven International, registered in the U.S. Trademark Office and is used with their permission.
healthcare-apis How To Use Mapping Debugger https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/iot/how-to-use-mapping-debugger.md
Previously updated : 04/10/2023 Last updated : 04/14/2023
In this article, learn how to use the MedTech service Mapping debugger. The Mapping debugger is a self-service tool that is used for creating, updating, and troubleshooting the MedTech service device and FHIR destination mappings. The Mapping debugger enables you to easily view and make inline adjustments in real-time, without ever having to leave the Azure portal. The Mapping debugger can also be used for uploading test device messages to see how they'll look after being processed into normalized messages and transformed into FHIR Observations. > [!TIP]
-> To learn about how the MedTech service transforms and persists device message data into the FHIR service see, [Overview of the MedTech service device data processing stages](overview-of-device-message-processing-stages.md).
+> To learn about how the MedTech service transforms and persists device message data into the FHIR service see, [Overview of the MedTech service device data processing stages](overview-of-device-data-processing-stages.md).
The following video presents an overview of the Mapping debugger: >
healthcare-apis How To Use Monitoring And Health Checks Tabs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/iot/how-to-use-monitoring-and-health-checks-tabs.md
Previously updated : 04/04/2023 Last updated : 04/14/2023
Metric category|Metric name|Metric description|
|--|--|--| |Availability|IotConnector Health Status|The overall health of the MedTech service.| |Errors|**Total Error Count**|The total number of errors.|
-|Latency|**Average Group Stage Latency**|The average latency of the group stage. The [group stage](overview-of-device-message-processing-stages.md#groupoptional) performs buffering, aggregating, and grouping on normalized messages.|
-|Latency|**Average Normalize Stage Latency**|The average latency of the normalized stage. The [normalized stage](overview-of-device-message-processing-stages.md#normalize) performs normalization on raw incoming messages.|
-|Traffic|Number of Fhir resources saved|The total number of FHIR resources [updated or persisted](overview-of-device-message-processing-stages.md#persist) by the MedTech service.|
-|Traffic|**Number of Incoming Messages**|The number of received raw [incoming messages](overview-of-device-message-processing-stages.md#ingest) (for example, the device events) from the configured source event hub.|
-|Traffic|**Number of Measurements**|The number of normalized value readings received by the FHIR [transformation stage](overview-of-device-message-processing-stages.md#transform) of the MedTech service.|
+|Latency|**Average Group Stage Latency**|The average latency of the group stage. The [group stage](overview-of-device-data-processing-stages.md#groupoptional) performs buffering, aggregating, and grouping on normalized messages.|
+|Latency|**Average Normalize Stage Latency**|The average latency of the normalized stage. The [normalized stage](overview-of-device-data-processing-stages.md#normalize) performs normalization on raw incoming messages.|
+|Traffic|Number of Fhir resources saved|The total number of FHIR resources [updated or persisted](overview-of-device-data-processing-stages.md#persist) by the MedTech service.|
+|Traffic|**Number of Incoming Messages**|The number of received raw [incoming messages](overview-of-device-data-processing-stages.md#ingest) (for example, the device events) from the configured source event hub.|
+|Traffic|**Number of Measurements**|The number of normalized value readings received by the FHIR [transformation stage](overview-of-device-data-processing-stages.md#transform) of the MedTech service.|
|Traffic|**Number of Message Groups**|The number of groups that have messages aggregated in the designated time window.| |Traffic|**Number of Normalized Messages**|The number of normalized messages.|
healthcare-apis Overview Of Device Data Processing Stages https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/iot/overview-of-device-data-processing-stages.md
+
+ Title: Overview of the MedTech service device data processing stages - Azure Health Data Services
+description: This article provides an overview of the MedTech service device data processing stages. The MedTech service ingests, normalizes, groups, transforms, and persists device message data in the FHIR service.
+++++ Last updated : 04/14/2023+++
+# Overview of the MedTech service device data processing stages
+
+> [!NOTE]
+> [Fast Healthcare Interoperability Resources (FHIR&#174;)](https://www.hl7.org/fhir/) is an open healthcare specification.
+
+This article provides an overview of the device data processing stages within the [MedTech service](overview.md). The MedTech service transforms device data into [FHIR Observations](https://www.hl7.org/fhir/observation.html) for persistence in the [FHIR service](../fhir/overview.md).
+
+The MedTech service device data processing follows these stages, in this order:
+
+* Ingest
+* Normalize - Device mapping applied.
+* Group - (Optional)
+* Transform - FHIR destination mapping applied.
+* Persist
++
+## Ingest
+Ingest is the first stage where device messages are received from an [Azure Event Hubs](../../event-hubs/index.yml) event hub and immediately pulled into the MedTech service. The Event Hubs service supports high scale and throughput with the ability to receive and process millions of device messages per second. It also enables the MedTech service to consume device messages asynchronously, removing the need for devices to wait while device messages are processed. The MedTech service's [system-assigned managed identity](../../active-directory/managed-identities-azure-resources/overview.md#managed-identity-types) and [Azure role-based access control (Azure RBAC)](../../role-based-access-control/overview.md) are used for secure access to the event hub.
+
+> [!NOTE]
+> JSON is the only supported format at this time for device message data.
+
+> [!IMPORTANT]
+> If you're going to allow access from multiple services to the event hub, it's required that each service has its own event hub consumer group.
+>
+> Consumer groups enable multiple consuming applications to have a separate view of the event stream, and to read the stream independently at their own pace and with their own offsets. For more information, see [Consumer groups](../../event-hubs/event-hubs-features.md#consumer-groups).
+>
+> Examples:
+>
+> - Two MedTech services accessing the same event hub.
+>
+> - A MedTech service and a storage writer application accessing the same event hub.
+
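As a minimal sketch (all resource names below are placeholders, not from this article), a dedicated consumer group for an additional consumer of the same event hub could be created with the Azure CLI:

```azurecli
az eventhubs eventhub consumer-group create \
    --resource-group my-resource-group \
    --namespace-name my-eventhubs-namespace \
    --eventhub-name devicedata \
    --name medtech-consumer-group
```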
+## Normalize
+Normalize is the next stage where device data is processed using the user-selected/user-created conforming and valid [device mapping](overview-of-device-mapping.md). This mapping process results in transforming device data into a normalized schema. The normalization process not only simplifies device data processing at later stages, but also provides the capability to project one device message into multiple normalized messages. For instance, a device could send multiple vital signs for body temperature, pulse rate, blood pressure, and respiration rate in a single device message. This device message would create four separate FHIR Observations. Each FHIR Observation would represent a different vital sign, with the device message projected into four different normalized messages.
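For illustration only (the field names are hypothetical and depend on the device), a single device message carrying four vital signs might look like the following before normalization splits it into four normalized messages:

```json
{
  "deviceId": "device123",
  "measurementDateTime": "2023-04-14T08:00:00Z",
  "bodyTemperature": 36.8,
  "pulseRate": 72,
  "bloodPressure": { "systolic": 120, "diastolic": 80 },
  "respirationRate": 16
}
```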
+
+## Group - (Optional)
+Group is the next *optional* stage where the normalized messages available from the MedTech service normalization stage are grouped using three different parameters:
+
+* Device identity
+* Measurement type
+* Time period
+
+Device identity and measurement type grouping are optional and enabled by the use of the [SampledData](https://www.hl7.org/fhir/datatypes.html#SampledData) measurement type. The SampledData measurement type provides a concise way to represent a time-based series of measurements from a device message into FHIR Observations. When you use the SampledData measurement type, measurements can be grouped into a single FHIR Observation that represents a 1-hour period or a 24-hour period.
+
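As a simplified, hand-authored sketch (codes and values are illustrative), a series of heart-rate readings grouped over one period could surface as a single Observation that uses SampledData:

```json
{
  "resourceType": "Observation",
  "status": "final",
  "code": {
    "coding": [
      { "system": "http://loinc.org", "code": "8867-4", "display": "Heart rate" }
    ]
  },
  "valueSampledData": {
    "origin": { "value": 0, "unit": "count/min" },
    "period": 5000,
    "dimensions": 1,
    "data": "70 72 E 75"
  }
}
```

In FHIR SampledData, `E` marks a sample position where no value was recorded.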
+## Transform
+Transform is the next stage where normalized messages are processed using the user-selected/user-created conforming and valid [FHIR destination mapping](how-to-configure-fhir-mappings.md). Normalized messages get transformed into FHIR Observations if a matching FHIR destination mapping has been authored. At this point, the [Device](https://www.hl7.org/fhir/device.html) resource, along with its associated [Patient](https://www.hl7.org/fhir/patient.html) resource, is also retrieved from the FHIR service using the device identifier present in the device message. These resources are added as a reference to the FHIR Observation being created.
+
+> [!NOTE]
+> All identity lookups are cached once resolved to decrease the load on the FHIR service. If you plan on reusing devices with multiple patients, it's advised that you create a virtual device resource that is specific to the patient and send the virtual device identifier in the device message payload. The virtual device can be linked to the actual device resource as a parent.
+
+If no Device resource for a given device identifier exists in the FHIR service, the outcome depends upon the value of [**Resolution type**](deploy-new-config.md#configure-the-destination-tab) set at the time of the MedTech service deployment. When set to **Lookup**, the specific message is ignored, and the pipeline continues to process other incoming device messages. If set to **Create**, the MedTech service creates minimal Device and Patient resources in the FHIR service.
+
+> [!NOTE]
+> The **Resolution type** can also be adjusted post deployment of the MedTech service if a different **Resolution type** is later required.
+
+The MedTech service provides near real-time processing and also attempts to reduce the number of requests made to the FHIR service by grouping requests into batches of 300 [normalized messages](#normalize). If there's a low volume of data and 300 normalized messages haven't been added to the group, the corresponding FHIR Observations in that group are persisted to the FHIR service after approximately five minutes. When there are fewer than 300 normalized messages to be processed, there may be a delay of approximately five minutes before FHIR Observations are created or updated in the FHIR service.
+
+> [!NOTE]
+> When multiple device messages contain data for the same FHIR Observation, have the same timestamp, and are sent within the same device message batch (for example, within the five minute window or in groups of 300 normalized messages), only the data corresponding to the latest device message for that FHIR Observation is persisted.
+>
+> For example:
+>
+> Device message 1:
+> ```json
+> {   
+> "patientid": "testpatient1",   
+> "deviceid": "testdevice1",
+> "systolic": "129",   
+> "diastolic": "65",   
+> "measurementdatetime": "2022-02-15T04:00:00.000Z"
+> } 
+> ```
+>
+> Device message 2:
+> ```json
+> {   
+> "patientid": "testpatient1",   
+> "deviceid": "testdevice1",   
+> "systolic": "113",   
+> "diastolic": "58",   
+> "measurementdatetime": "2022-02-15T04:00:00.000Z"
+> }
+> ```
+>
+> Assuming these device messages were ingested within the same five minute window or in the same group of 300 normalized messages, and since the `measurementdatetime` is the same for both device messages (indicating these contain data for the same FHIR Observation), only device message 2 is persisted to represent the latest/most recent data.
+
+## Persist
+Persist is the final stage where the FHIR Observations from the transform stage are persisted in the [FHIR service](../fhir/overview.md). If the FHIR Observation is new, it's created in the FHIR service. If the FHIR Observation already exists, it's updated in the FHIR service. The MedTech service's [system-assigned managed identity](../../active-directory/managed-identities-azure-resources/overview.md#managed-identity-types) and [Azure role-based access control (Azure RBAC)](../../role-based-access-control/overview.md) are used for secure access to the FHIR service.
+
+## Next steps
+
+In this article, you learned about the MedTech service device data processing stages and how device data is persisted in the FHIR service.
+
+To get an overview of the MedTech service device and FHIR destination mappings, see
+
+> [!div class="nextstepaction"]
+> [Overview of the MedTech service device mapping](overview-of-device-mapping.md)
+
+> [!div class="nextstepaction"]
+> [Overview of the MedTech service FHIR destination mapping](how-to-configure-fhir-mappings.md)
+
+FHIR&#174; is a registered trademark of Health Level Seven International, registered in the U.S. Trademark Office and is used with their permission.
healthcare-apis Overview Of Device Mapping https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/iot/overview-of-device-mapping.md
Previously updated : 04/03/2023 Last updated : 04/14/2023
This article provides an overview of the MedTech service device mapping.
-The MedTech service requires two types of [JSON](https://www.json.org/) mappings that are added to your MedTech service through the Azure portal or Azure Resource Manager API. The device mapping is the first type and controls mapping values in the device message data sent to the MedTech service to an internal, normalized data object. The device mapping contains expressions that the MedTech service uses to extract types, device identifiers, measurement date time, and measurement value(s). The [FHIR destination mapping](how-to-configure-fhir-mappings.md) is the second type and controls the mapping for [FHIR Observations](https://www.hl7.org/fhir/observation.html).
+The MedTech service requires two types of [JSON](https://www.json.org/) mappings that are added to your MedTech service through the Azure portal or Azure Resource Manager API. The device mapping is the first type and controls mapping values in the device data sent to the MedTech service to an internal, normalized data object. The device mapping contains expressions that the MedTech service uses to extract types, device identifiers, measurement date time, and measurement value(s). The [FHIR destination mapping](how-to-configure-fhir-mappings.md) is the second type and controls the mapping for [FHIR Observations](https://www.hl7.org/fhir/observation.html).
> [!NOTE]
-> The device and FHIR destination mappings are re-evaluated each time a message is processed. Any updates to either mapping will take effect immediately.
+> The device and FHIR destination mappings are re-evaluated each time a device message is processed. Any updates to either mapping will take effect immediately.
## Device mapping basics The device mapping contains collections of expression templates used to extract device message data into an internal, normalized format for further evaluation. Each device message received is evaluated against **all** expression templates in the collection. This evaluation means that a single device message can be separated into multiple outbound messages that can be mapped to multiple FHIR Observations in the FHIR service. > [!TIP]
-> For more information about how the MedTech service processes device message data into FHIR Observations for persistence on the FHIR service, see [Overview of the MedTech service device message processing stages](overview-of-device-message-processing-stages.md).
+> For more information about how the MedTech service processes device message data into FHIR Observations for persistence on the FHIR service, see [Overview of the MedTech service device data processing stages](overview-of-device-data-processing-stages.md).
This diagram provides an illustration of what happens during the normalization stage within the MedTech service.
You can use these template types within CollectionContent depending on your use
and/or -- [IotJsonPathContent](how-to-use-iotjsonpathcontenttemplate-mappings.md) for device messages being routed through [Azure IoT Hub](/azure/iot-hub/iot-concepts-and-iot-hub) to your MedTech service event hub. IotJsonPathContent supports [JSONPath](https://goessner.net/articles/JsonPath/).
+- [IotJsonPathContent](how-to-use-iotjsonpathcontent-mappings.md) for device messages being routed through [Azure IoT Hub](/azure/iot-hub/iot-concepts-and-iot-hub) to your MedTech service event hub. IotJsonPathContent supports [JSONPath](https://goessner.net/articles/JsonPath/).
:::image type="content" source="media/overview-of-device-mapping/device-mapping-templates-diagram.png" alt-text="Diagram showing MedTech service device mapping templates architecture." lightbox="media/overview-of-device-mapping/device-mapping-templates-diagram.png":::
The resulting normalized message will look like this after the normalization sta
When the MedTech service is processing the device message, the templates in the CollectionContent are used to evaluate the message. The `typeMatchExpression` is used to determine whether or not the template should be used to create a normalized message from the device message. If the `typeMatchExpression` evaluates to true, then the `deviceIdExpression`, `timestampExpression`, and `valueExpression` values are used to locate and extract the JSON values from the device message and create a normalized message. In this example, all expressions are written in JSONPath, however, it would be valid to write all the expressions in JMESPath. It's up to the template author to determine which expression language is most appropriate. > [!TIP]
-> See [Troubleshoot MedTech service deployment errors](troubleshoot-errors-deployment.md) for assistance fixing common MedTech service deployment errors.
+> For assistance fixing common MedTech service deployment errors, see [Troubleshoot MedTech service deployment errors](troubleshoot-errors-deployment.md).
>
-> See [Troubleshoot errors using the MedTech service logs](troubleshoot-errors-logs.md) for assistance fixing MedTech service errors.
+> For assistance fixing MedTech service errors, see [Troubleshoot errors using the MedTech service logs](troubleshoot-errors-logs.md).
## Next steps
To learn how to use CalculatedContent with the MedTech service device mapping, s
To learn how to use IotJsonPathContent with the MedTech service device mapping, see > [!div class="nextstepaction"]
-> [How to use IotJsonPathContent with the MedTech service device mapping](how-to-use-iotjsonpathcontenttemplate-mappings.md)
+> [How to use IotJsonPathContent with the MedTech service device mapping](how-to-use-iotjsonpathcontent-mappings.md)
To learn how to use custom functions with the MedTech service device mapping, see
healthcare-apis Overview Of Fhir Destination Mapping https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/iot/overview-of-fhir-destination-mapping.md
+
+ Title: Overview of the MedTech service FHIR destination mapping - Azure Health Data Services
+description: This article provides an overview of the MedTech service FHIR destination mapping.
++++ Last updated : 04/14/2023+++
+# Overview of the MedTech service FHIR destination mapping
+
+> [!NOTE]
+> [Fast Healthcare Interoperability Resources (FHIR&#174;)](https://www.hl7.org/fhir/) is an open healthcare specification.
+
+This article provides an overview of the MedTech service FHIR destination mapping.
+
+The MedTech service requires two types of [JSON](https://www.json.org/) mappings that are added to your MedTech service through the Azure portal or Azure Resource Manager API. The [device mapping](overview-of-device-mapping.md) is the first type and controls mapping values in the device data sent to the MedTech service to an internal, normalized data object. The device mapping contains expressions that the MedTech service uses to extract types, device identifiers, measurement date time, and measurement value(s). The FHIR destination mapping is the second type and controls how the normalized data is mapped to [FHIR Observations](https://www.hl7.org/fhir/observation.html).
+
+> [!NOTE]
+> The device and FHIR destination mappings are re-evaluated each time a device message is processed. Any updates to either mapping will take effect immediately.
+
+## FHIR destination mapping basics
+
+The FHIR destination mapping controls how the data extracted from a device message is mapped into a FHIR observation.
+
+- Should an observation be created for a point in time or over a period of an hour?
+- What codes should be added to the observation?
+- Should the value be represented as [SampledData](https://www.hl7.org/fhir/datatypes.html#SampledData) or a [Quantity](https://www.hl7.org/fhir/datatypes.html#Quantity)?
+
+All of these options are controlled by the FHIR destination mapping configuration.
+
+Once a device message is transformed into a normalized data model, the data is collected for transformation to a [FHIR Observation](https://www.hl7.org/fhir/observation.html). If the Observation type is [SampledData](https://www.hl7.org/fhir/datatypes.html#SampledData), the data is grouped according to device identifier, measurement type, and time period (the time period can be either 1 hour or 24 hours). The output of this grouping is sent for conversion into a single [FHIR Observation](https://www.hl7.org/fhir/observation.html) that represents the time period for that data type. For other Observation types ([Quantity](https://www.hl7.org/fhir/datatypes.html#Quantity), [CodeableConcept](https://www.hl7.org/fhir/datatypes.html#CodeableConcept), and [string](https://www.hl7.org/fhir/datatypes.html#string)), the data isn't grouped; instead, each measurement is transformed into a single Observation representing a point in time.
+
+> [!TIP]
+> For more information about how the MedTech service processes device message data into FHIR Observations for persistence on the FHIR service, see [Overview of the MedTech service device data processing stages](overview-of-device-data-processing-stages.md).
+
+This diagram provides an illustration of what happens during the transformation stage within the MedTech service.
++
+> [!NOTE]
+> The FHIR Observation in this diagram is not the complete resource. See [Example](#example) in this overview for the entire FHIR Observation.
+
+## FHIR destination mapping validations
+
+The validation process validates the FHIR destination mapping before allowing it to be saved for use. These elements are required in the FHIR destination mapping.
+
+**FHIR destination mapping**
+
+|Element|Required|
+|:|:-|
+|typeName|True|
+
+> [!NOTE]
+> The 'typeName' element is used to link a FHIR destination mapping template to one or more device mapping templates. Device mapping templates with the same 'typeName' element generate normalized data that will be evaluated with a FHIR destination mapping template that has the same 'typeName'.
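+
+As an illustration of this linkage, the following sketch shows a device mapping template whose 'typeName' matches the FHIR destination mapping template in the [Example](#example) later in this article. The template follows the CalculatedContent format, and the expressions and value names are illustrative placeholders rather than a complete, production-ready mapping.
+
+```json
+{
+  "templateType": "CalculatedContent",
+  "template": {
+    "typeName": "heartrate",
+    "typeMatchExpression": "$..[?(@heartRate)]",
+    "deviceIdExpression": "$.matchedToken.deviceId",
+    "timestampExpression": "$.matchedToken.endDate",
+    "values": [
+      {
+        "required": true,
+        "valueName": "hr",
+        "valueExpression": "$.matchedToken.heartRate"
+      }
+    ]
+  }
+}
+```
+
+A CodeValueFhir template that declares `"typeName": "heartrate"` is then used to transform the normalized data that this device mapping template produces.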
+
+## CollectionFhir
+
+CollectionFhir is the root template type used by the MedTech service FHIR destination mapping. CollectionFhir is a list of all templates that are used during the transformation stage. You can define one or more templates within CollectionFhir, with each normalized message evaluated against all templates.
+
+### CodeValueFhir
+
+CodeValueFhir is currently the only template supported in the FHIR destination mapping. It allows you to define codes, the effective period, and the value of the observation. Multiple value types are supported: [SampledData](https://www.hl7.org/fhir/datatypes.html#SampledData), [CodeableConcept](https://www.hl7.org/fhir/datatypes.html#CodeableConcept), [Quantity](https://www.hl7.org/fhir/datatypes.html#Quantity), and [String](https://www.hl7.org/fhir/datatypes.html#primitive). Along with these configurable values, the identifier for the Observation resource and linking to the proper Device and Patient resources are handled automatically.
+
+
+|Property|Description|
+|:-|--|
+|**typeName**| The type of measurement this template should bind to. There should be at least one Device mapping template that outputs this type.
+|**periodInterval**|The period of time the observation created should represent. Supported values are 0 (an instance), 60 (an hour), 1440 (a day). Note: `periodInterval` is required when the Observation type is "SampledData" and is ignored for any other Observation types.
+|**category**|Any number of [CodeableConcepts](http://hl7.org/fhir/datatypes-definitions.html#codeableconcept) to classify the type of observation created.
+|**codes**|One or more [Codings](http://hl7.org/fhir/datatypes-definitions.html#coding) to apply to the observation created.
+|**codes[].code**|The code for the [Coding](http://hl7.org/fhir/datatypes-definitions.html#coding).
+|**codes[].system**|The system for the [Coding](http://hl7.org/fhir/datatypes-definitions.html#coding).
+|**codes[].display**|The display for the [Coding](http://hl7.org/fhir/datatypes-definitions.html#coding).
+|**value**|The value to extract and represent in the observation. For more information, see [Value type codes](#value-type-codes).
+|**components**|*Optional:* One or more components to create on the observation.
+|**components[].codes**|One or more [Codings](http://hl7.org/fhir/datatypes-definitions.html#coding) to apply to the component.
+|**components[].value**|The value to extract and represent in the component. For more information, see [Value type codes](#value-type-codes).
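+
+The `components` property is useful when a single measurement produces multiple related values that belong in one Observation. The following sketch is illustrative only; the type name, LOINC codes, units, and value names are placeholders and assume a device mapping template that outputs `systolic` and `diastolic` values.
+
+```json
+{
+  "templateType": "CodeValueFhir",
+  "template": {
+    "typeName": "bloodpressure",
+    "codes": [
+      {
+        "code": "85354-9",
+        "system": "http://loinc.org",
+        "display": "Blood pressure panel"
+      }
+    ],
+    "components": [
+      {
+        "codes": [
+          {
+            "code": "8480-6",
+            "system": "http://loinc.org",
+            "display": "Systolic blood pressure"
+          }
+        ],
+        "value": {
+          "valueName": "systolic",
+          "valueType": "Quantity",
+          "system": "http://unitsofmeasure.org",
+          "code": "mm[Hg]",
+          "unit": "mmHg"
+        }
+      },
+      {
+        "codes": [
+          {
+            "code": "8462-4",
+            "system": "http://loinc.org",
+            "display": "Diastolic blood pressure"
+          }
+        ],
+        "value": {
+          "valueName": "diastolic",
+          "valueType": "Quantity",
+          "system": "http://unitsofmeasure.org",
+          "code": "mm[Hg]",
+          "unit": "mmHg"
+        }
+      }
+    ]
+  }
+}
+```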
++
+### Value type codes
+
+The supported value type codes for the MedTech service FHIR destination mapping:
+
+### SampledData
+
+Represents the [SampledData](http://hl7.org/fhir/datatypes.html#SampledData) FHIR data type. Observation measurements are written to a value stream starting at a point in time and incrementing forward using the period defined. If no value is present, an `E` is written into the data stream. If the period is such that two or more values occupy the same position in the data stream, the latest value is used. The same logic is applied when an observation using the SampledData is updated.
+
+| Property | Description |
+|--|--|
+|**DefaultPeriod**|The default period in milliseconds to use.
+|**Unit**|The unit to set on the origin of the SampledData.
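+
+As a hedged sketch, a CodeValueFhir template using SampledData might look like the following. The camelCase property names mirror the style of the Quantity example later in this article, and the period and codes are illustrative placeholders.
+
+```json
+{
+  "templateType": "CodeValueFhir",
+  "template": {
+    "typeName": "heartrate",
+    "periodInterval": 60,
+    "codes": [
+      {
+        "code": "8867-4",
+        "system": "http://loinc.org",
+        "display": "Heart rate"
+      }
+    ],
+    "value": {
+      "valueType": "SampledData",
+      "valueName": "hr",
+      "defaultPeriod": 5000,
+      "unit": "count/min"
+    }
+  }
+}
+```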
+
+### Quantity
+
+Represents the [Quantity](http://hl7.org/fhir/datatypes.html#Quantity) FHIR data type. This type creates a single, point-in-time Observation. If a new value arrives that contains the same device identifier, measurement type, and timestamp, the previous Observation is updated to the new value.
+
+| Property | Description |
+|--|--|
+|**Unit**| Unit representation.
+|**Code**| Coded form of the unit.
+|**System**| System that defines the coded unit form.
+
+### CodeableConcept
+
+Represents the [CodeableConcept](http://hl7.org/fhir/datatypes.html#CodeableConcept) FHIR data type. The value in the normalized data model isn't used; instead, when this type of data is received, an Observation is created with a specific code representing that an observation was recorded at a specific point in time.
+
+| Property | Description |
+|--|--|
+|**Text**|Plain text representation.
+|**Codes**|One or more [Codings](http://hl7.org/fhir/datatypes-definitions.html#coding) to apply to the observation created.
+|**Codes[].Code**|The code for the [Coding](http://hl7.org/fhir/datatypes-definitions.html#coding).
+|**Codes[].System**|The system for the [Coding](http://hl7.org/fhir/datatypes-definitions.html#coding).
+|**Codes[].Display**|The display for the [Coding](http://hl7.org/fhir/datatypes-definitions.html#coding).
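+
+As a hedged sketch, a CodeValueFhir template using CodeableConcept might look like the following. The type name, code system, and codes are illustrative placeholders, and the camelCase property names mirror the style of the other value examples in this article.
+
+```json
+{
+  "templateType": "CodeValueFhir",
+  "template": {
+    "typeName": "deviceAlert",
+    "codes": [
+      {
+        "code": "deviceAlert",
+        "system": "https://www.contoso.com/device-codes",
+        "display": "Device alert"
+      }
+    ],
+    "value": {
+      "valueType": "CodeableConcept",
+      "valueName": "alert",
+      "text": "Device alert received",
+      "codes": [
+        {
+          "code": "alert",
+          "system": "https://www.contoso.com/device-codes",
+          "display": "Device alert"
+        }
+      ]
+    }
+  }
+}
+```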
+
+### String
+
+Represents the [string](https://www.hl7.org/fhir/datatypes.html#string) FHIR data type. This type creates a single, point-in-time Observation. If a new value arrives that contains the same device identifier, measurement type, and timestamp, the previous Observation is updated to the new value.
+
+### Example
+
+> [!TIP]
+> You can use the MedTech service [Mapping debugger](how-to-use-mapping-debugger.md) for assistance creating, updating, and troubleshooting the MedTech service device and FHIR destination mappings. The Mapping debugger enables you to easily view and make inline adjustments in real-time, without ever having to leave the Azure portal. The Mapping debugger can also be used for uploading test device messages to see how they'll look after being processed into normalized messages and transformed into FHIR Observations.
+
+> [!NOTE]
+> This example and normalized message is a continuation from [Overview of the MedTech service device mapping](overview-of-device-mapping.md#example).
+
+In this example, we're using a normalized message capturing `heartRate` data:
+
+```json
+[
+ {
+ "type": "heartrate",
+ "occurrenceTimeUtc": "2023-03-13T22:46:01.875Z",
+ "deviceId": "device01",
+ "properties": [
+ {
+ "name": "hr",
+ "value": "78"
+ }
+ ]
+ }
+]
+```
+
+We're using this FHIR destination mapping for the transformation stage:
+
+```json
+{
+ "templateType": "CollectionFhir",
+ "template": [
+ {
+ "templateType": "CodeValueFhir",
+ "template": {
+ "codes": [
+ {
+ "code": "8867-4",
+ "system": "http://loinc.org",
+ "display": "Heart rate"
+ }
+ ],
+ "typeName": "heartrate",
+ "value": {
+ "system": "http://unitsofmeasure.org",
+ "code": "count/min",
+ "unit": "count/min",
+ "valueName": "hr",
+ "valueType": "Quantity"
+ }
+ }
+ }
+ ]
+}
+
+```
+
+The resulting FHIR Observation will look like this after the transformation stage:
+
+```json
+[
+ {
+ "code": {
+ "coding": [
+ {
+ "system": {
+ "value": "http://loinc.org"
+ },
+ "code": {
+ "value": "8867-4"
+ },
+ "display": {
+ "value": "Heart rate"
+ }
+ }
+ ],
+ "text": {
+ "value": "heartrate"
+ }
+ },
+ "effective": {
+ "start": {
+ "value": "2023-03-13T22:46:01.8750000Z"
+ },
+ "end": {
+ "value": "2023-03-13T22:46:01.8750000Z"
+ }
+ },
+ "issued": {
+ "value": "2023-04-05T21:02:59.1650841+00:00"
+ },
+ "value": {
+ "value": {
+ "value": 78
+ },
+ "unit": {
+ "value": "count/min"
+ },
+ "system": {
+ "value": "http://unitsofmeasure.org"
+ },
+ "code": {
+ "value": "count/min"
+ }
+ }
+ }
+]
+```
+
+> [!TIP]
+> For assistance fixing common MedTech service deployment errors, see [Troubleshoot MedTech service deployment errors](troubleshoot-errors-deployment.md).
+>
+> For assistance fixing MedTech service errors, see [Troubleshoot errors using the MedTech service logs](troubleshoot-errors-logs.md).
+
+## Next steps
+
+In this article, you learned about the MedTech service FHIR destination mapping.
+
+To get an overview of the MedTech service device mapping, see
+
+> [!div class="nextstepaction"]
+> [Overview of the MedTech service device mapping](overview-of-device-mapping.md)
+
+To learn how to use CalculatedContent with the MedTech service device mapping, see
+
+> [!div class="nextstepaction"]
+> [How to use CalculatedContent with the MedTech service device mapping](how-to-use-calculatedcontent-mappings.md)
+
+To learn how to use IotJsonPathContent with the MedTech service device mapping, see
+
+> [!div class="nextstepaction"]
+> [How to use IotJsonPathContent with the MedTech service device mapping](how-to-use-iotjsonpathcontent-mappings.md)
+
+To learn how to use custom functions with the MedTech service device mapping, see
+
+> [!div class="nextstepaction"]
+> [How to use custom functions with the MedTech service device mapping](how-to-use-custom-functions.md)
+
+FHIR&#174; is a registered trademark of Health Level Seven International, registered in the U.S. Trademark Office and is used with their permission.
healthcare-apis Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/iot/overview.md
Previously updated : 04/13/2023 Last updated : 04/14/2023
The MedTech service processes device data in five stages:
4. **Transform** - When the normalized data is grouped, it's transformed through the FHIR destination mapping and is ready to become FHIR Observations.
-5. **Persist** - After the transformation is done, the new data is sent to FHIR service and persisted as FHIR Observations.
+5. **Persist** - After the transformation is done, the new data is sent to the FHIR service and persisted as FHIR Observations.
## Key features of the MedTech service
The MedTech service delivers your device data into FHIR service, ensuring that y
### Configurable
-The MedTech service can be customized and configured by using [device](how-to-configure-device-mappings.md) and [FHIR destination](how-to-configure-fhir-mappings.md) mappings to define the filtering and transformation of your data into FHIR Observations.
+The MedTech service can be customized and configured by using [device](overview-of-device-mapping.md) and [FHIR destination](how-to-configure-fhir-mappings.md) mappings to define the filtering and transformation of your data into FHIR Observations.
Useful options could include:
In this article, you learned about the MedTech service and its capabilities.
To learn about how the MedTech service processes device data, see > [!div class="nextstepaction"]
-> [Overview of the MedTech service device data processing stages](overview-of-device-message-processing-stages.md)
+> [Overview of the MedTech service device data processing stages](overview-of-device-data-processing-stages.md)
To learn about the different deployment methods for the MedTech service, see
healthcare-apis Troubleshoot Errors Deployment https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/iot/troubleshoot-errors-deployment.md
Previously updated : 02/28/2023 Last updated : 04/14/2023
Here's a list of errors that can be found in the Azure Resource Manager (ARM) AP
**Fix**: Set the `location` property of the FHIR destination in your ARM template to the same value as the parent MedTech service's `location` property. > [!NOTE]
-> If you're not able to fix your MedTech service issue using this troubleshooting guide, you can open an [Azure Technical Support](https://azure.microsoft.com/support/create-ticket/) ticket attaching copies of your device message, [device mapping, and FHIR destination mapping](how-to-create-mappings-copies.md) to your request to better help with issue determination.
+> If you're not able to fix your MedTech service issue using this troubleshooting guide, you can open an [Azure Technical Support](https://azure.microsoft.com/support/create-ticket/) ticket attaching copies of your device message, [device and FHIR destination mappings](how-to-use-mapping-debugger.md#overview-of-the-mapping-debugger) to your request to better help with issue determination.
## Next steps
healthcare-apis Troubleshoot Errors Logs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/iot/troubleshoot-errors-logs.md
Previously updated : 04/04/2023 Last updated : 04/14/2023
This property represents the operation being performed by the MedTech service wh
|FHIRConversion|The data flow stage where the grouped-normalized data is transformed into an Observation resource.| > [!NOTE]
-> To learn about the MedTech service device message data transformation, see [Overview of the MedTech service device message processing stages](overview-of-device-message-processing-stages.md).
+> To learn about the MedTech service device message data transformation, see [Overview of the MedTech service device data processing stages](overview-of-device-data-processing-stages.md).
## MedTech service health check exceptions and fixes
The expression and line with the error are specified in the error message.
**Fix**: On the Azure portal, go to your FHIR service, and assign the **FHIR Data Writer** role to your MedTech service (see [step-by-step instructions](deploy-new-deploy.md#grant-access-to-the-fhir-service)). > [!NOTE]
-> If you're not able to fix your MedTech service issue using this troubleshooting guide, you can open an [Azure Technical Support](https://azure.microsoft.com/support/create-ticket/) ticket and attach copies of your device message, [device mapping, and FHIR destination mapping](how-to-create-mappings-copies.md) to your request to better help with issue determination.
+> If you're not able to fix your MedTech service issue using this troubleshooting guide, you can open an [Azure Technical Support](https://azure.microsoft.com/support/create-ticket/) ticket attaching copies of your device message, [device and FHIR destination mappings](how-to-use-mapping-debugger.md#overview-of-the-mapping-debugger) to your request to better help with issue determination.
## Next steps
iot-develop Concepts Using C Sdk And Embedded C Sdk https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-develop/concepts-using-c-sdk-and-embedded-c-sdk.md
description: Helps developers decide which C-based Azure IoT device SDK to use f
-+ Last updated 09/16/2022-+ #Customer intent: As a device developer, I want to understand when to use the Azure IoT C SDK or the Embedded C SDK to optimize device and application performance.
iot-hub-device-update Connected Cache Configure https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-hub-device-update/connected-cache-configure.md
- Title: Configure Microsoft Connected Cache for Device Update for Azure IoT Hub-
-description: Overview of Microsoft Connected Cache for Device Update for Azure IoT Hub
-- Previously updated : 08/19/2022----
-# Configure Microsoft Connected Cache for Device Update for IoT Hub
-
-> [!NOTE]
-> This information relates to a preview feature that's available for early testing and use in a production environment. This feature is fully supported but it's still in active development and may receive substantial changes until it becomes generally available.
-
-Microsoft Connected Cache (MCC) is deployed to Azure IoT Edge gateways as an IoT Edge module. Like other IoT Edge modules, environment variables and container create options are used to configure MCC modules. This article defines the environment variables and container create options that are required for a customer to successfully deploy the Microsoft Connected Cache module for use by Device Update for IoT Hub.
-
-## Module deployment details
-
-There's no naming requirement for the Microsoft Connected Cache module since no other module or service interactions rely on the name of the MCC module for communication. Additionally, the parent-child relationship of the Microsoft Connected Cache servers isn't dependent on this module name, but rather the FQDN or IP address of the IoT Edge gateway.
-
-Microsoft Connected Cache module environment variables are used to pass basic module identity information and functional module settings to the container.
-
-| Variable name | Value format | Description |
-|--|--|--|--|
-| CUSTOMER_ID | Azure subscription ID GUID | Required <br><br> This is the customer's key, which provides secure authentication of the cache node to Delivery Optimization services. |
-| CACHE_NODE_ID | Cache node ID GUID | Required <br><br> Uniquely identifies the MCC node to Delivery Optimization services. |
-| CUSTOMER_KEY | Customer Key GUID | Required <br><br> This is the customer's key, which provides secure authentication of the cache node to Delivery Optimization services. |
-| STORAGE_*N*_SIZE_GB (Where *N* is the cache drive) | Integer | Required <br><br> Specify up to nine drives to cache content and specify the maximum space in gigabytes to allocate for content on each cache drive. The number of the drive must match the cache drive binding values specified in the container create option MicrosoftConnectedCache*N* value.<br><br>Examples:<br>STORAGE_1_SIZE_GB = 150<br>STORAGE_2_SIZE_GB = 50<br><br>Minimum size of the cache is 10 GB. |
-| UPSTREAM_HOST | FQDN/IP | Optional <br><br> This value can specify an upstream MCC node that acts as a proxy if the Connected Cache node is disconnected from the internet. This setting is used to support the nested IoT scenario.<br><br>**Note:** MCC listens on http default port 80. |
-| UPSTREAM_PROXY | FQDN/IP:PORT | Optional <br><br> The outbound internet proxy. This could also be the OT DMZ proxy of an ISA 95 network. |
-| CACHEABLE_CUSTOM_*N*_HOST | HOST/IP<br>FQDN | Optional <br><br> Required to support custom package repositories. Repositories could be hosted locally or on the internet. There's no limit to the number of custom hosts that can be configured.<br><br>Examples:<br>Name = CACHEABLE_CUSTOM_1_HOST Value = packages.foo.com<br> Name = CACHEABLE_CUSTOM_2_HOST Value = packages.bar.com |
-| CACHEABLE_CUSTOM_*N*_CANONICAL | Alias | Optional <br><br> Required to support custom package repositories. This value can be used as an alias and will be used by the cache server to reference different DNS names. For example, repository content hostname may be packages.foo.com, but for different regions there could be an extra prefix that is added to the hostname like westuscdn.packages.foo.com and eastuscdn.packages.foo.com. By setting the canonical alias, you ensure that content isn't duplicated for content coming from the same host, but different CDN sources. The format of the canonical value isn't important, but it must be unique to the host. It may be easiest to set the value to match the host value.<br><br>Examples based on Custom Host examples above:<br>Name = CACHEABLE_CUSTOM_1_CANONICAL Value = foopackages<br> Name = CACHEABLE_CUSTOM_2_CANONICAL Value = packages.bar.com |
-| IS_SUMMARY_PUBLIC | True or False | Optional <br><br> Enables viewing of the summary report on the local network or internet. Use of an API key (discussed later) is required to view the summary report if set to true. |
-| IS_SUMMARY_ACCESS_UNRESTRICTED | True or False | Optional <br><br> Enables viewing of summary report on the local network or internet without use of API key from any device in the network. Use if you don't want to lock down access to viewing cache server summary data via the browser. |
-
-## Module container create options
-
-Container create options provide control of the settings related to storage and ports used by the Microsoft Connected Cache module.
-
-Sample container create options:
-
-```json
-{
- "HostConfig": {
- "Binds": [
- "/microsoftConnectedCache1/:/nginx/cache1/"
- ],
- "PortBindings": {
- "8081/tcp": [
- {
- "HostPort": "80"
- }
- ],
- "5000/tcp": [
- {
- "HostPort": "5100"
- }
- ]
- }
- }
-}
-```
-
-The following sections list the required container create variables used to deploy the MCC module.
-
-### HostConfig
-
-The `HostConfig` parameters are required to map the container storage location to the storage location on the disk. Up to nine locations can be specified.
-
->[!Note]
->The number of the drive must match the cache drive binding values specified in the environment variable STORAGE_*N*_SIZE_GB value, `/MicrosoftConnectedCache*N*/:/nginx/cache*N*/`.
-
-### PortBindings
-
-The `PortBindings` parameters map container ports to ports on the host device.
-
-The first port binding specifies the external machine HTTP port that MCC listens on for content requests. The default HostPort is port 80 and other ports aren't supported at this time as the ADU client makes requests on port 80 today. TCP port 8081 is the internal container port that the MCC listens on and can't be changed.
-
-The second port binding ensures that the container isn't listening on host port 5000. The Microsoft Connected Cache module has a .NET Core service, which is used by the caching engine for various functions. To support nested edge, the HostPort must not be set to 5000 because the registry proxy module is already listening on host port 5000.
-
-## Microsoft Connected Cache summary report
-
-The summary report is currently the only way for a customer to view caching data for the Microsoft Connected Cache instances deployed to IoT Edge gateways. The report is generated at 15-second intervals and includes averaged stats for the period and aggregated stats for the lifetime of the module. The key stats that customers will be interested in are:
-
-* **hitBytes** - The sum of bytes delivered that came directly from cache.
-* **missBytes** - The sum of bytes delivered that Microsoft Connected Cache had to download from CDN to see the cache.
-* **eggressBytes** - The sum of hitBytes and missBytes and is the total bytes delivered to clients.
-* **hitRatioBytes** - The ratio of hitBytes to egressBytes. For example, if 100% of eggressBytes delivered in a period were equal to the hitBytes, this value would be 1.
--
-The summary report is available at `http://<IoT Edge gateway>:5001/summary` Replace \<IoT Edge Gateway\> with the IP address or hostname of the IoT Edge gateway hosting the MCC module.
iot-hub-device-update Connected Cache Disconnected Device Update https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-hub-device-update/connected-cache-disconnected-device-update.md
Title: Disconnected device update using Microsoft Connected Cache
-description: Understand support for disconnected device update using Microsoft Connected Cache
-
+description: Understand how the Microsoft Connected Cache module for Azure IoT Edge enables updating disconnected devices with Device Update for Azure IoT Hub
+ Previously updated : 08/19/2022 Last updated : 04/14/2023
-# Understand support for disconnected device updates
+# Understand support for disconnected device updates (preview)
+
+The Microsoft Connected Cache (MCC) module for IoT Edge devices enables Device Update capabilities on disconnected devices behind gateways. In a transparent gateway scenario, one or more devices can pass their messages through a single gateway device that maintains the connection to Azure IoT Hub. In these cases, the child devices may not have internet connectivity or may not be allowed to download content from the internet. The MCC module provides Device Update for IoT Hub customers with the capability of an intelligent in-network cache. The cache enables image-based and package-based updates of Linux OS-based devices that are behind an IoT Edge gateway (also called *downstream* IoT devices). The cache also helps reduce the bandwidth used for updates.
> [!NOTE] > This information relates to a preview feature that's available for early testing and use in a production environment. This feature is fully supported but it's still in active development and may receive substantial changes until it becomes generally available.
-In a transparent gateway scenario, one or more devices can pass their messages through a single gateway device that maintains the connection to Azure IoT Hub. In these cases, the child devices may not have internet connectivity or may not be allowed to download content from the internet. The Microsoft Connected Cache preview IoT Edge module provides Device Update for IoT Hub customers with the capability of an intelligent in-network cache. The cache enables image-based and package-based updates of Linux OS-based devices behind an IoT Edge gateway (also called *downstream* IoT devices), and also helps reduce the bandwidth used for updates.
+If you aren't familiar with IoT Edge gateways, learn more about [How an IoT Edge device can be used as a gateway](../iot-edge/iot-edge-as-gateway.md).
-## Microsoft Connected Cache preview for Device Update for IoT Hub
+## What is Microsoft Connected Cache
-Microsoft Connected Cache is an intelligent, transparent cache for content published for Device Update for IoT Hub and can be customized to cache content from other sources like package repositories as well. Microsoft Connected Cache is a cold cache that is warmed by client requests for the exact file ranges requested by the Delivery Optimization client and doesn't pre-seed content. The diagram and step-by-step description below explains how Microsoft Connected Cache works within the Device Update infrastructure.
+Microsoft Connected Cache is an intelligent, transparent cache for content published for Device Update for IoT Hub and can be customized to cache content from other sources like package repositories as well. Microsoft Connected Cache is a cold cache that is warmed by client requests for the exact file ranges requested by the Delivery Optimization client and doesn't pre-seed content. The following diagram and step-by-step description explain how Microsoft Connected Cache works within the Device Update infrastructure.
>[!Note] >This flow assumes that the IoT Edge gateway has internet connectivity. For the downstream IoT Edge gateway (nested edge) scenario, the content delivery network (CDN) can be considered the MCC hosted on the parent IoT Edge gateway.
- :::image type="content" source="media/connected-cache-overview/disconnected-device-update.png" alt-text="Disconnected Device Update" lightbox="media/connected-cache-overview/disconnected-device-update.png":::
-1. Microsoft Connected Cache is deployed as an IoT Edge module to the on-premises server.
-2. Device Update for IoT Hub clients are configured to download content from Microsoft Connected Cache by virtue of either the GatewayHostName attribute of the device connection string for IoT leaf devices **or** the parent_hostname set in the config.toml for IoT Edge child devices.
-3. Device Update for IoT Hub clients receive update content download commands from the Device Update service and request update content from the Microsoft Connected Cache instead of the CDN. Microsoft Connected Cache listens on HTTP port 80 by default, and the Delivery Optimization client makes the content request on port 80 so the parent must be configured to listen on this port. Only the HTTP protocol is supported at this time.
+1. Microsoft Connected Cache is deployed as an IoT Edge module to the on-premises gateway server.
+2. Device Update for IoT Hub clients are configured to download content from Microsoft Connected Cache by using either the GatewayHostName attribute of the device connection string for IoT leaf devices **or** the parent_hostname set in the config.toml for IoT Edge child devices.
+3. Device Update for IoT Hub clients receive download commands from the Device Update service and request update content from the Microsoft Connected Cache instead of the CDN. Microsoft Connected Cache listens on HTTP port 80 by default, and the Delivery Optimization client makes the content request on port 80 so the parent must be configured to listen on this port. Only the HTTP protocol is supported at this time.
4. The Microsoft Connected Cache server downloads content from the CDN, seeds its local cache stored on disk and delivers the content to the Device Update client. >[!Note] >When using package-based updates, the Microsoft Connected Cache server will be configured by the admin with the required package hostname.
-5. Subsequent requests from other Device Update clients for the same update content will now come from cache and Microsoft Connected Cache won't make requests to the CDN for the same content.
+5. Subsequent requests from other Device Update clients for the same update content now come from cache and Microsoft Connected Cache won't make requests to the CDN for the same content.
### Supporting industrial IoT (IIoT) with parent/child hosting scenarios
-When a downstream or child IoT Edge gateway is hosting a Microsoft Connected Cache server, it will be configured to request update content from the parent IoT Edge gateway, also hosting a Microsoft Connected Cache server. This request is repeated for as many levels as necessary before reaching the parent IoT Edge gateway hosting a Microsoft Connected Cache server that has internet access. From the internet connected server, the content is requested from the CDN at which point the content is delivered back to the child IoT Edge gateway that originally requested the content. The content will be stored on disk at every level.
+Industrial IoT (IIoT) scenarios often involve multiple levels of IoT Edge gateways, with only the top level having internet access. In this scenario, each gateway hosts a Microsoft Connected Cache service that is configured to request update content from its parent gateway.
+
+When a child (or downstream) IoT Edge gateway makes a request for update content from its parent gateway, this request is repeated for as many levels as necessary before reaching the topmost IoT Edge gateway hosting a Microsoft Connected Cache server that has internet access. From the internet connected server, the content is requested from the CDN at which point the content is delivered back to the child IoT Edge gateway that originally requested the content. The content is stored on disk at every level.
## Request access to the preview The Microsoft Connected Cache IoT Edge module is released as a preview for customers who are deploying solutions using Device Update for IoT Hub. Access to the preview is by invitation. [Request Access](https://aka.ms/MCCForDeviceUpdateForIoT) to the Microsoft Connected Cache preview for Device Update for IoT Hub and provide the information requested if you would like access to the module.+
+## Microsoft Connected Cache module configuration
+
+Microsoft Connected Cache is deployed to Azure IoT Edge gateways as an IoT Edge module. Like other IoT Edge modules, environment variables and container create options are used to configure MCC modules. This section defines the environment variables and container create options that are required to successfully deploy the MCC module for use by Device Update for IoT Hub.
+
+There's no naming requirement for the Microsoft Connected Cache module since no other module or service interactions rely on the name of the MCC module for communication. Additionally, the parent-child relationship of the Microsoft Connected Cache servers isn't dependent on this module name, but rather the FQDN or IP address of the IoT Edge gateway.
+
+### Module environment variables
+
+Microsoft Connected Cache module environment variables are used to pass basic module identity information and functional module settings to the container.
+
+| Variable name | Value format | Description |
+|--|--|--|
+| CUSTOMER_ID | Azure subscription ID GUID | Required <br><br> This value is the customer's ID, which provides secure authentication of the cache node to Delivery Optimization services. |
+| CACHE_NODE_ID | Cache node ID GUID | Required <br><br> Uniquely identifies the MCC node to Delivery Optimization services. |
+| CUSTOMER_KEY | Customer Key GUID | Required <br><br> This value is the customer's key, which provides secure authentication of the cache node to Delivery Optimization services. |
+| STORAGE_*N*_SIZE_GB (Where *N* is the cache drive) | Integer | Required <br><br> Specify up to nine drives to cache content and specify the maximum space in gigabytes to allocate for content on each cache drive. The number of the drive must match the cache drive binding values specified in the container create option MicrosoftConnectedCache*N* value.<br><br>Examples:<br>STORAGE_1_SIZE_GB = 150<br>STORAGE_2_SIZE_GB = 50<br><br>Minimum size of the cache is 10 GB. |
+| UPSTREAM_HOST | FQDN/IP | Optional <br><br> This value can specify an upstream MCC node that acts as a proxy if the Connected Cache node is disconnected from the internet. This setting is used to support the nested IoT scenario.<br><br>**Note:** MCC listens on http default port 80. |
+| UPSTREAM_PROXY | FQDN/IP:PORT | Optional <br><br> The outbound internet proxy. This value could also be the OT DMZ proxy of an ISA 95 network. |
+| CACHEABLE_CUSTOM_*N*_HOST | HOST/IP<br>FQDN | Optional <br><br> Required to support custom package repositories. Repositories could be hosted locally or on the internet. There's no limit to the number of custom hosts that can be configured.<br><br>Examples:<br>Name = CACHEABLE_CUSTOM_1_HOST Value = packages.foo.com<br> Name = CACHEABLE_CUSTOM_2_HOST Value = packages.bar.com |
+| CACHEABLE_CUSTOM_*N*_CANONICAL | Alias | Optional <br><br> Required to support custom package repositories. This value can be used as an alias and will be used by the cache server to reference different DNS names. For example, repository content hostname may be packages.foo.com, but for different regions there could be an extra prefix that is added to the hostname like westuscdn.packages.foo.com and eastuscdn.packages.foo.com. By setting the canonical alias, you ensure that content isn't duplicated for content coming from the same host, but different CDN sources. The format of the canonical value isn't important, but it must be unique to the host. It may be easiest to set the value to match the host value.<br><br>Examples based on the previous custom host examples:<br>Name = CACHEABLE_CUSTOM_1_CANONICAL Value = foopackages<br> Name = CACHEABLE_CUSTOM_2_CANONICAL Value = packages.bar.com |
+| IS_SUMMARY_PUBLIC | True or False | Optional <br><br> Enables viewing of the summary report on the local network or internet. Use of an API key (discussed later) is required to view the summary report if set to true. |
+| IS_SUMMARY_ACCESS_UNRESTRICTED | True or False | Optional <br><br> Enables viewing of summary report on the local network or internet without use of API key from any device in the network. Use if you don't want to lock down access to viewing cache server summary data via the browser. |
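+
+As a hedged sketch, the required variables might be set on the MCC module within the `modules` section of an IoT Edge deployment manifest as shown in the following fragment. The module name, image URI, and values are placeholders, and only a subset of the variables is shown.
+
+```json
+{
+  "ConnectedCache": {
+    "type": "docker",
+    "status": "running",
+    "restartPolicy": "always",
+    "settings": {
+      "image": "<MCC module image URI>",
+      "createOptions": "<container create options as a JSON string>"
+    },
+    "env": {
+      "CUSTOMER_ID": { "value": "<your Azure subscription ID>" },
+      "CACHE_NODE_ID": { "value": "<your cache node ID>" },
+      "CUSTOMER_KEY": { "value": "<your customer key>" },
+      "STORAGE_1_SIZE_GB": { "value": "100" }
+    }
+  }
+}
+```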
+
+### Module container create options
+
+Container create options provide control of the settings related to storage and ports used by the Microsoft Connected Cache module.
+
+Sample container create options:
+
+```json
+{
+ "HostConfig": {
+ "Binds": [
+ "/microsoftConnectedCache1/:/nginx/cache1/"
+ ],
+ "PortBindings": {
+ "8081/tcp": [
+ {
+ "HostPort": "80"
+ }
+ ],
+ "5000/tcp": [
+ {
+ "HostPort": "5100"
+ }
+ ]
+ }
+ }
+}
+```
+
+The following sections list the required container create variables used to deploy the MCC module.
+
+#### HostConfig
+
+The `HostConfig` parameters are required to map the container storage location to the storage location on the disk. Up to nine locations can be specified.
+
+>[!Note]
+>The number of the drive must match the cache drive binding values specified in the environment variable STORAGE_*N*_SIZE_GB value, `/MicrosoftConnectedCache*N*/:/nginx/cache*N*/`.
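+
+For example, a deployment that sets the STORAGE_1_SIZE_GB and STORAGE_2_SIZE_GB environment variables might use the following `Binds` values (a sketch; the host paths are illustrative):
+
+```json
+{
+  "HostConfig": {
+    "Binds": [
+      "/microsoftConnectedCache1/:/nginx/cache1/",
+      "/microsoftConnectedCache2/:/nginx/cache2/"
+    ]
+  }
+}
+```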
+
+#### PortBindings
+
+The `PortBindings` parameters map container ports to ports on the host device.
+
+The first port binding specifies the external machine HTTP port that MCC listens on for content requests. The default HostPort is port 80 and other ports aren't supported at this time as the ADU client makes requests on port 80 today. TCP port 8081 is the internal container port that the MCC listens on and can't be changed.
+
+The second port binding ensures that the container isn't listening on host port 5000. The Microsoft Connected Cache module has a .NET Core service, which is used by the caching engine for various functions. To support nested edge, the HostPort must not be set to 5000 because the registry proxy module is already listening on host port 5000.
+
+## Microsoft Connected Cache summary report
+
+The summary report is currently the only way for a customer to view caching data for the Microsoft Connected Cache instances deployed to IoT Edge gateways. The report is generated at 15-second intervals and includes averaged stats for the period and aggregated stats for the lifetime of the module. The key stats that the report provides are:
+
+* **hitBytes** - The sum of bytes delivered that came directly from cache.
+* **missBytes** - The sum of bytes delivered that Microsoft Connected Cache had to download from the CDN to seed the cache.
+* **eggressBytes** - The sum of hitBytes and missBytes, which is the total number of bytes delivered to clients.
+* **hitRatioBytes** - The ratio of hitBytes to egressBytes. For example, if 100% of eggressBytes delivered in a period were equal to the hitBytes, this value would be 1.
+
+The summary report is available at `http://<IoT Edge gateway>:5001/summary`. Replace \<IoT Edge Gateway\> with the IP address or hostname of the IoT Edge gateway hosting the MCC module.
+
+## Next steps
+
+Learn how to implement Microsoft Connected Cache in [single gateways](./connected-cache-single-level.md) or [nested and industrial IoT gateways](./connected-cache-nested-level.md).
iot-hub-device-update Connected Cache Industrial Iot Nested https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-hub-device-update/connected-cache-industrial-iot-nested.md
- Title: Microsoft Connected Cache within an Azure IoT Edge for Industrial IoT configuration
-description: Microsoft Connected Cache within an Azure IoT Edge for Industrial IoT configuration tutorial
-- Previously updated : 2/16/2021----
-# Microsoft Connected Cache preview deployment scenario sample: Microsoft Connected Cache within an Azure IoT Edge for Industrial IoT configuration
-
-> [!NOTE]
-> This information relates to a preview feature that's available for early testing and use in a production environment. This feature is fully supported but it's still in active development and may receive substantial changes until it becomes generally available.
-
-Manufacturing networks are often organized in hierarchical layers following the [Purdue network model](https://en.wikipedia.org/wiki/Purdue_Enterprise_Reference_Architecture) (included in the [ISA 95](https://en.wikipedia.org/wiki/ANSI/ISA-95) and [ISA 99](https://www.isa.org/standards-and-publications/isa-standards/isa-standards-committees/isa99) standards). In these networks, only the top layer has connectivity to the cloud and the lower layers in the hierarchy can only communicate with adjacent north and south layers.
-
-This GitHub sample, [Azure IoT Edge for Industrial IoT](https://github.com/Azure-Samples/iot-edge-for-iiot), deploys the following:
-
-* Simulated Purdue network in Azure
-* Industrial assets
-* Hierarchy of Azure IoT Edge gateways
-
-These components will be used to acquire industrial data and securely upload it to the cloud without compromising the security of the network. Microsoft Connected Cache can be deployed to support the download of content at all levels within the ISA 95 compliant network.
-
-The key to configuring Microsoft Connected Cache deployments within an ISA 95 compliant network is configuring both the OT proxy *and* the upstream host at the L3 IoT Edge gateway.
-
-1. Configure Microsoft Connected Cache deployments at the L5 and L4 levels as described in the Two-Level Nested IoT Edge gateway sample
-2. The deployment at the L3 IoT Edge gateway must specify:
-
- * UPSTREAM_HOST - The IP/FQDN of the L4 IoT Edge gateway, which the L3 Microsoft Connected Cache will request content.
- * UPSTREAM_PROXY - The IP/FQDN:PORT of the OT proxy server.
-
-3. The OT proxy must add the L4 MCC FQDN/IP address to the allowlist.
-
-To validate that Microsoft Connected Cache is functioning properly, execute the following command in the terminal of the IoT Edge device, hosting the module, or any device on the network. Replace \<Azure IoT Edge Gateway IP\> with the IP address or hostname of your IoT Edge gateway. (see environment variable details for information on visibility of this report).
-
-```bash
- wget http://<L3 IoT Edge Gateway IP>/mscomtest/wuidt.gif?cacheHostOrigin=au.download.windowsupdate.com
-```
iot-hub-device-update Connected Cache Nested Level https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-hub-device-update/connected-cache-nested-level.md
Title: Microsoft Connected Cache two level nested Azure IoT Edge Gateway with outbound unauthenticated proxy
+ Title: Deploy Microsoft Connected Cache on nested gateways
-description: Microsoft Connected Cache two level nested Azure IoT Edge Gateway with outbound unauthenticated proxy tutorial
-
+description: Microsoft Connected Cache two level nested Azure IoT Edge Gateway with outbound unauthenticated proxy
+ Previously updated : 2/16/2021 Last updated : 04/14/2023
-# Microsoft Connected Cache preview deployment scenario sample: Two level nested Azure IoT Edge Gateway with outbound unauthenticated proxy
+# Deploy the Microsoft Connected Cache module on nested gateways, including in IIoT scenarios (preview)
+
+The Microsoft Connected Cache module supports nested, or hierarchical gateways, in which one or more IoT Edge gateway devices are behind a single gateway that has access to the internet. This article describes a deployment scenario sample that has two nested Azure IoT Edge gateway devices (a *parent gateway* and a *child gateway*) with outbound unauthenticated proxy.
> [!NOTE] > This information relates to a preview feature that's available for early testing and use in a production environment. This feature is fully supported but it's still in active development and may receive substantial changes until it becomes generally available.
-The diagram below describes the scenario where one Azure IoT Edge gateway has direct access to CDN resources and is acting as the parent to another Azure IoT Edge gateway. The child IoT Edge gateway is acting as the parent to an Azure IoT leaf device such as a Raspberry Pi. Both the Azure IoT Edge child and Azure IoT device are internet isolated. The example below demonstrates the configuration for two-levels of Azure IoT Edge gateways, but there is no limit to the depth of upstream hosts that Microsoft Connected Cache will support. There is no difference in Microsoft Connected Cache container create options from the previous examples.
+The following diagram describes the scenario where one Azure IoT Edge gateway has direct access to CDN resources and is acting as the parent to another Azure IoT Edge gateway. The child IoT Edge gateway is acting as the parent to an IoT leaf device such as a Raspberry Pi. Both the IoT Edge child gateway and the IoT device are internet isolated. This example demonstrates the configuration for two levels of Azure IoT Edge gateways, but there's no limit to the depth of upstream hosts that Microsoft Connected Cache will support.
+
-Refer to the documentation [Connect downstream IoT Edge devices - Azure IoT Edge](../iot-edge/how-to-connect-downstream-iot-edge-device.md?preserve-view=true&tabs=azure-portal&view=iotedge-2020-11) for more details on configuring layered deployments of Azure IoT Edge gateways. Additionally note that when deploying Azure IoT Edge, Microsoft Connected Cache, and custom modules, all modules must reside in the same container registry.
+Refer to the documentation [Connect downstream IoT Edge devices](../iot-edge/how-to-connect-downstream-iot-edge-device.md) for more details on configuring layered deployments of Azure IoT Edge gateways. Additionally note that when deploying Azure IoT Edge, Microsoft Connected Cache, and custom modules, all modules must reside in the same container registry.
>[!Note] >When deploying Azure IoT Edge, Microsoft Connected Cache, and custom modules, all modules must reside in the same container registry.
- :::image type="content" source="media/connected-cache-overview/nested-level-proxy.png" alt-text="Microsoft Connected Cache Nested" lightbox="media/connected-cache-overview/nested-level-proxy.png":::
- ## Parent gateway configuration
-1. Add the Microsoft Connected Cache module to your Azure IoT Edge gateway device deployment in Azure IoT Hub (see [Support for Disconnected Devices](connected-cache-disconnected-device-update.md) for details on how to get the module).
-2. Add the environment variables for the deployment. Below is an example of the environment variables.
-
- **Environment Variables**
-
- | Name | Value |
- | -- | -|
- | CACHE_NODE_ID | See [environment variable](connected-cache-configure.md) descriptions |
- | CUSTOMER_ID | See [environment variable](connected-cache-configure.md) descriptions |
- | CUSTOMER_KEY | See [environment variable](connected-cache-configure.md) descriptions |
- | STORAGE_1_SIZE_GB | 10 |
- | CACHEABLE_CUSTOM_1_HOST | Packagerepo.com:80 |
- | CACHEABLE_CUSTOM_1_CANONICAL | Packagerepo.com |
- | IS_SUMMARY_ACCESS_UNRESTRICTED| true |
-
-3. Add the container create options for the deployment. There is no difference in MCC container create options from the previous example. Below is an example of the container create options.
-
-### Container create options
-
-```json
-{
- "HostConfig": {
- "Binds": [
- "/MicrosoftConnectedCache1/:/nginx/cache1/"
- ],
- "PortBindings": {
- "8081/tcp": [
- {
- "HostPort": "80"
- }
- ],
- "5000/tcp": [
- {
- "HostPort": "5100"
- }
- ]
- }
- }
-}
-```
+
+Use the following steps to configure the Microsoft Connected Cache module on the parent gateway device.
+
+1. Add the Microsoft Connected Cache module to your Azure IoT Edge gateway device deployment in Azure IoT Hub (see [Support for disconnected devices](connected-cache-disconnected-device-update.md) for details on how to request access to the preview module).
+2. Add the environment variables for the deployment. The following table is an example of the environment variables:
+
+ | Name | Value |
+ | - | -- |
+ | CACHE_NODE_ID | See [environment variable](connected-cache-disconnected-device-update.md#module-environment-variables) descriptions |
+ | CUSTOMER_ID | See [environment variable](connected-cache-disconnected-device-update.md#module-environment-variables) descriptions |
+ | CUSTOMER_KEY | See [environment variable](connected-cache-disconnected-device-update.md#module-environment-variables) descriptions |
+ | STORAGE_1_SIZE_GB | 10 |
+ | CACHEABLE_CUSTOM_1_HOST | Packagerepo.com:80 |
+ | CACHEABLE_CUSTOM_1_CANONICAL | Packagerepo.com |
+ | IS_SUMMARY_ACCESS_UNRESTRICTED | true |
+
+3. Add the container create options for the deployment. There's no difference in MCC container create options for single or nested gateways. The following example shows the container create options for the MCC module:
+
+ ```json
+ {
+ "HostConfig": {
+ "Binds": [
+ "/MicrosoftConnectedCache1/:/nginx/cache1/"
+ ],
+ "PortBindings": {
+ "8081/tcp": [
+ {
+ "HostPort": "80"
+ }
+ ],
+ "5000/tcp": [
+ {
+ "HostPort": "5100"
+ }
+ ]
+ }
+ }
+ }
+ ```
## Child gateway configuration
+Use the following steps to configure the Microsoft Connected Cache module on the child gateway device.
+ >[!Note]
->If you have replicated containers used in your configuration in your own private registry, then there will need to be a modification to the config.toml settings and runtime settings in your module deployment. For more information, refer to [Connect downstream IoT Edge devices - Azure IoT Edge](../iot-edge/how-to-connect-downstream-iot-edge-device.md?preserve-view=true&tabs=azure-portal&view=iotedge-2020-11#deploy-modules-to-lower-layer-devices) for more details.
+>If you have replicated containers used in your configuration in your own private registry, then there will need to be a modification to the config.toml settings and runtime settings in your module deployment. For more information, see [Connect downstream IoT Edge devices](../iot-edge/how-to-connect-downstream-iot-edge-device.md#deploy-modules-to-lower-layer-devices).
+1. Modify the image path for the IoT Edge agent as demonstrated in the example below:
-1. Modify the image path for the Edge agent as demonstrated in the example below:
+ ```markdown
+ [agent]
+ name = "edgeAgent"
+ type = "docker"
+ env = {}
+ [agent.config]
+ image = "<parent_device_fqdn_or_ip>:8000/iotedge/azureiotedge-agent:1.2.0-rc2"
+ auth = {}
+ ```
- ```markdown
- [agent]
- name = "edgeAgent"
- type = "docker"
- env = {}
- [agent.config]
- image = "<parent_device_fqdn_or_ip>:8000/iotedge/azureiotedge-agent:1.2.0-rc2"
- auth = {}
- ```
-2. Modify the Edge Hub and Edge agent Runtime Settings in the Azure IoT Edge deployment as demonstrated in this example:
-
- * Under Edge Hub, in the image field, enter ```$upstream:8000/iotedge/azureiotedge-hub:1.2.0-rc2```
- * Under Edge Agent, in the image field, enter ```$upstream:8000/iotedge/azureiotedge-agent:1.2.0-rc2```
+2. Modify the IoT Edge hub and agent runtime settings in the IoT Edge deployment as demonstrated in this example:
+
+ * For the IoT Edge hub image, enter `$upstream:8000/iotedge/azureiotedge-hub:1.2.0-rc2`
+ * For the IoT Edge agent image, enter `$upstream:8000/iotedge/azureiotedge-agent:1.2.0-rc2`
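+
+   As a hedged sketch, these image values might appear in the `systemModules` section of the deployment manifest as follows (only the image settings are shown; other required module properties are omitted):
+
+   ```json
+   {
+     "systemModules": {
+       "edgeAgent": {
+         "type": "docker",
+         "settings": {
+           "image": "$upstream:8000/iotedge/azureiotedge-agent:1.2.0-rc2"
+         }
+       },
+       "edgeHub": {
+         "type": "docker",
+         "settings": {
+           "image": "$upstream:8000/iotedge/azureiotedge-hub:1.2.0-rc2"
+         }
+       }
+     }
+   }
+   ```
+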
3. Add the Microsoft Connected Cache module to your Azure IoT Edge gateway device deployment in Azure IoT Hub.
- * Choose a name for your module: ```ConnectedCache```
- * Modify the Image URI: ```$upstream:8000/mcc/linux/iot/mcc-ubuntu-iot-amd64:latest```
+ * Choose a name for your module: `ConnectedCache`
+ * Modify the image URI: `$upstream:8000/mcc/linux/iot/mcc-ubuntu-iot-amd64:latest`
4. Add the same set of environment variables and container create options used in the parent deployment.
->[!Note]
->The CACHE_NODE_ID shoudl be unique. The CUSTOMER_ID and CUSTOMER_KEY values will be identical to the parent. (see [Configure Microsoft Connected Cache](connected-cache-configure.md)
-For a validation of properly functioning Microsoft Connected Cache, execute the following command in the terminal of the IoT Edge device hosting the module or any device on the network. Replace \<Azure IoT Edge Gateway IP\> with the IP address or hostname of your IoT Edge gateway. (see environment variable details for information on visibility of this report).
+ >[!Note]
+ >The CACHE_NODE_ID should be unique. The CUSTOMER_ID and CUSTOMER_KEY values will be identical to the parent. For more information, see [Module environment variables](connected-cache-disconnected-device-update.md#module-environment-variables).
+
+To validate that Microsoft Connected Cache is functioning properly, execute the following command in the terminal of the IoT Edge device hosting the module, or any device on the network. Replace \<CHILD Azure IoT Edge Gateway IP\> with the IP address or hostname of your child IoT Edge gateway. For information on the visibility of this report, see [Microsoft Connected Cache summary report](./connected-cache-disconnected-device-update.md#microsoft-connected-cache-summary-report).
+
+```bash
+wget http://<CHILD Azure IoT Edge Gateway IP>/mscomtest/wuidt.gif?cacheHostOrigin=au.download.windowsupdate.com
+```
+
+## Industrial IoT (IIoT) configuration
+
+Manufacturing networks are often organized in hierarchical layers following the [Purdue network model](https://en.wikipedia.org/wiki/Purdue_Enterprise_Reference_Architecture) (included in the [ISA 95](https://en.wikipedia.org/wiki/ANSI/ISA-95) and [ISA 99](https://www.isa.org/standards-and-publications/isa-standards/isa-standards-committees/isa99) standards). In these networks, only the top layer has connectivity to the cloud and the lower layers in the hierarchy can only communicate with adjacent north and south layers.
+
+This GitHub sample, [Azure IoT Edge for Industrial IoT](https://github.com/Azure-Samples/iot-edge-for-iiot), deploys the following components:
+
+* Simulated Purdue network in Azure
+* Industrial assets
+* Hierarchy of Azure IoT Edge gateways
+
+These components will be used to acquire industrial data and securely upload it to the cloud without compromising the security of the network. Microsoft Connected Cache can be deployed to support the download of content at all levels within the ISA 95 compliant network.
+
+The key to configuring Microsoft Connected Cache deployments within an ISA 95 compliant network is configuring both the OT proxy *and* the upstream host at the L3 IoT Edge gateway.
+
+1. Configure Microsoft Connected Cache deployments at the L5 and L4 levels as described in the two-level nested IoT Edge gateway sample.
+2. The deployment at the L3 IoT Edge gateway must specify the following values (illustrative example values are shown after this list):
+
+    * UPSTREAM_HOST - The IP/FQDN of the L4 IoT Edge gateway from which the L3 Microsoft Connected Cache requests content.
+ * UPSTREAM_PROXY - The IP/FQDN:PORT of the OT proxy server.
+
+3. The OT proxy must add the L4 MCC FQDN/IP address to the allowlist.
+
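+For illustration only, the L3 deployment's environment variables might use values like the following. The host names and port are placeholders; substitute the addresses from your own network.
+
+| Name | Value |
+| -- | -- |
+| UPSTREAM_HOST | `l4gateway.contoso.com` |
+| UPSTREAM_PROXY | `l3proxy.contoso.com:8080` |
+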
+To validate that Microsoft Connected Cache is functioning properly, execute the following command in the terminal of the IoT Edge device hosting the module, or any device on the network. Replace \<L3 IoT Edge Gateway IP\> with the IP address or hostname of your L3 IoT Edge gateway. For information on the visibility of this report, see [Microsoft Connected Cache summary report](./connected-cache-disconnected-device-update.md#microsoft-connected-cache-summary-report).
```bash
- wget http://<CHILD Azure IoT Edge Gateway IP>/mscomtest/wuidt.gif?cacheHostOrigin=au.download.windowsupdate.com
-```
+wget http://<L3 IoT Edge Gateway IP>/mscomtest/wuidt.gif?cacheHostOrigin=au.download.windowsupdate.com
+```
iot-hub-device-update Connected Cache Single Level https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-hub-device-update/connected-cache-single-level.md
Title: Microsoft Connected Cache preview deployment scenario samples
+ Title: Deploy Microsoft Connected Cache on a gateway
-description: Microsoft Connected Cache preview deployment scenario samples tutorials
-
+description: Update disconnected devices with Device Update using the Microsoft Connected Cache module on IoT Edge gateways
+ Previously updated : 2/16/2021 Last updated : 04/14/2023
-# Microsoft Connected Cache preview deployment scenario samples
+# Deploy the Microsoft Connected Cache module on a single gateway (preview)
+
+The Microsoft Connected Cache (MCC) module for IoT Edge gateways enables Device Update for disconnected devices behind the gateway. This article introduces two different configurations for deploying the MCC module on an IoT Edge gateway.
+
+If you have multiple IoT Edge gateways chained together, refer to the instructions in [Deploy the Microsoft Connected Cache module on nested gateways](./connected-cache-nested-level.md).
> [!NOTE] > This information relates to a preview feature that's available for early testing and use in a production environment. This feature is fully supported but it's still in active development and may receive substantial changes until it becomes generally available.
-## Single level Azure IoT Edge gateway no proxy
-
-The diagram below describes the scenario where an Azure IoT Edge gateway that has direct access to CDN resources and there is an Azure IoT leaf device such as a Raspberry PI that is an internet isolated child devices of the Azure IoT Edge gateway.
+## Deploy to a gateway with no proxy
- :::image type="content" source="media/connected-cache-overview/disconnected-device-update.png" alt-text="Microsoft Connected Cache Disconnected Device Update" lightbox="media/connected-cache-overview/disconnected-device-update.png":::
+The following diagram describes the scenario where an Azure IoT Edge gateway has direct access to content delivery network (CDN) resources and has the Microsoft Connected Cache module deployed on it. Behind the gateway, there's an IoT leaf device, such as a Raspberry Pi, that is an internet-isolated child device of the IoT Edge gateway.
-1. Add the Microsoft Connected Cache module to your Azure IoT Edge gateway device deployment in Azure IoT Hub (see [Support for Disconnected Devices](connected-cache-disconnected-device-update.md) for details on how to get the module).
-2. Add the environment variables for the deployment. Below is an example of the environment variables.
- **Environment Variables**
-
- | Name | Value |
- | -- | -|
- | CACHE_NODE_ID | See [environment variable](connected-cache-configure.md) descriptions |
- | CUSTOMER_ID | See [environment variable](connected-cache-configure.md) descriptions |
- | CUSTOMER_KEY | See [environment variable](connected-cache-configure.md) descriptions |
- | STORAGE_1_SIZE_GB | 10 |
-
-3. Add the container create options for the deployment. Below is an example of the container create options.
-
-### Container create options
-
-```json
-{
- "HostConfig": {
- "Binds": [
- "/MicrosoftConnectedCache1/:/nginx/cache1/"
- ],
- "PortBindings": {
- "8081/tcp": [
- {
- "HostPort": "80"
- }
- ],
- "5000/tcp": [
- {
- "HostPort": "5100"
- }
- ]
- }
- }
-}
-```
+The following steps are an example of configuring the MCC environment variables to connect directly to the CDN with no proxy:
-For a validation of properly functioning Microsoft Connected Cache, execute the following command in the terminal of the IoT Edge device hosting the module or any device on the network. Replace \<Azure IoT Edge Gateway IP\> with the IP address or hostname of your IoT Edge gateway. (see environment variable details for information on visibility of this report).
+1. Add the Microsoft Connected Cache module to your Azure IoT Edge gateway device deployment in Azure IoT Hub (see [Support for Disconnected Devices](connected-cache-disconnected-device-update.md) for details on how to get the module).
+2. Add the environment variables for the deployment. The following table is an example of the environment variables:
+
+ | Name | Value |
+ | -- | -- |
+ | CACHE_NODE_ID | See [environment variable](connected-cache-disconnected-device-update.md#module-environment-variables) descriptions |
+ | CUSTOMER_ID | See [environment variable](connected-cache-disconnected-device-update.md#module-environment-variables) descriptions |
+ | CUSTOMER_KEY | See [environment variable](connected-cache-disconnected-device-update.md#module-environment-variables) descriptions |
+ | STORAGE_1_SIZE_GB | 10 |
+
+3. Add the container create options for the deployment. For example:
+
+ ```json
+ {
+ "HostConfig": {
+ "Binds": [
+ "/MicrosoftConnectedCache1/:/nginx/cache1/"
+ ],
+ "PortBindings": {
+ "8081/tcp": [
+ {
+ "HostPort": "80"
+ }
+ ],
+ "5000/tcp": [
+ {
+ "HostPort": "5100"
+ }
+ ]
+ }
+ }
+ }
+ ```
+
+To validate that Microsoft Connected Cache is functioning properly, execute the following command in the terminal of the IoT Edge device hosting the module, or any device on the network. Replace \<IoT Edge Gateway IP\> with the IP address or hostname of your IoT Edge gateway. For information on the visibility of this report, see [Microsoft Connected Cache summary report](./connected-cache-disconnected-device-update.md#microsoft-connected-cache-summary-report).
```bash
- wget http://<IoT Edge Gateway IP>/mscomtest/wuidt.gif?cacheHostOrigin=au.download.windowsupdate.com
+wget http://<IoT Edge Gateway IP>/mscomtest/wuidt.gif?cacheHostOrigin=au.download.windowsupdate.com
```
-## Single level Azure IoT Edge gateway with outbound unauthenticated proxy
+## Deploy to a gateway with outbound unauthenticated proxy
+
+In this scenario, an Azure IoT Edge Gateway has access to content delivery network (CDN) resources through an outbound unauthenticated proxy. Microsoft Connected Cache is configured to cache content from a custom repository and the summary report is visible to anyone on the network.
-In this scenario there is an Azure IoT Edge Gateway that has access to CDN resources through an outbound unauthenticated proxy. Microsoft Connected Cache is being configured to cache content from a custom repository and the summary report has been made visible to anyone on the network. Below is an example of the MCC environment variables that would be set.
- :::image type="content" source="media/connected-cache-overview/single-level-proxy.png" alt-text="Microsoft Connected Cache Single Level Proxy" lightbox="media/connected-cache-overview/single-level-proxy.png":::
+The following steps are an example of configuring the MCC environment variables to support an outbound unauthenticated proxy:
1. Add the Microsoft Connected Cache module to your Azure IoT Edge gateway device deployment in Azure IoT Hub. 2. Add the environment variables for the deployment. Below is an example of the environment variables.
- **Environment Variables**
-
- | Name | Value |
- | -- | -|
- | CACHE_NODE_ID | See [environment variable](connected-cache-configure.md) descriptions |
- | CUSTOMER_ID | See [environment variable](connected-cache-configure.md) descriptions |
- | CUSTOMER_KEY | See [environment variable](connected-cache-configure.md) descriptions |
- | STORAGE_1_SIZE_GB | 10 |
- | CACHEABLE_CUSTOM_1_HOST | Packagerepo.com:80 |
- | CACHEABLE_CUSTOM_1_CANONICAL | Packagerepo.com |
- | IS_SUMMARY_ACCESS_UNRESTRICTED| true |
- | UPSTREAM_PROXY | Your proxy server IP or FQDN |
-
-3. Add the container create options for the deployment. There is no difference in MCC container create options from the previous example. Below is an example of the container create options.
-
-### Container create options
-
-```json
-{
- "HostConfig": {
- "Binds": [
- "/MicrosoftConnectedCache1/:/nginx/cache1/"
- ],
- "PortBindings": {
- "8081/tcp": [
- {
- "HostPort": "80"
- }
- ],
- "5000/tcp": [
- {
- "HostPort": "5100"
- }
- ]
- }
- }
-}
-```
-
-For a validation of properly functioning Microsoft Connected Cache, execute the following command in the terminal of the Azure IoT Edge device hosting the module or any device on the network. Replace \<Azure IoT Edge Gateway IP\> with the IP address or hostname of your IoT Edge gateway. (see environment variable details for information on visibility of this report).
+ | Name | Value |
+ | -- | - |
+ | CACHE_NODE_ID | See [environment variable](connected-cache-disconnected-device-update.md#module-environment-variables) descriptions |
+ | CUSTOMER_ID | See [environment variable](connected-cache-disconnected-device-update.md#module-environment-variables) descriptions |
+ | CUSTOMER_KEY | See [environment variable](connected-cache-disconnected-device-update.md#module-environment-variables) descriptions |
+ | STORAGE_1_SIZE_GB | 10 |
+ | CACHEABLE_CUSTOM_1_HOST | Packagerepo.com:80 |
+ | CACHEABLE_CUSTOM_1_CANONICAL | Packagerepo.com |
+ | IS_SUMMARY_ACCESS_UNRESTRICTED| true |
+ | UPSTREAM_PROXY | Your proxy server IP or FQDN |
+
+3. Add the container create options for the deployment. For example:
+
+ ```json
+ {
+ "HostConfig": {
+ "Binds": [
+ "/MicrosoftConnectedCache1/:/nginx/cache1/"
+ ],
+ "PortBindings": {
+ "8081/tcp": [
+ {
+ "HostPort": "80"
+ }
+ ],
+ "5000/tcp": [
+ {
+ "HostPort": "5100"
+ }
+ ]
+ }
+ }
+ }
+ ```
+
+To validate that Microsoft Connected Cache is functioning properly, execute the following command in the terminal of the Azure IoT Edge device hosting the module, or any device on the network. Replace \<Azure IoT Edge Gateway IP\> with the IP address or hostname of your IoT Edge gateway. For information on the visibility of this report, see [Microsoft Connected Cache summary report](./connected-cache-disconnected-device-update.md#microsoft-connected-cache-summary-report).
```bash
- wget http://<Azure IoT Edge Gateway IP>/mscomtest/wuidt.gif?cacheHostOrigin=au.download.windowsupdate.com
+wget http://<Azure IoT Edge Gateway IP>/mscomtest/wuidt.gif?cacheHostOrigin=au.download.windowsupdate.com
```
iot Iot Device Sdks Lifecycle And Support https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot/iot-device-sdks-lifecycle-and-support.md
+
+ Title: Azure IoT device SDKs lifecycle and support
+description: Describe the lifecycle and support for our IoT Hub and DPS device SDKs
+++++ Last updated : 4/13/2023++
+# Azure IoT Device SDK lifecycle and support
+
+This article describes the Azure IoT Device SDK lifecycle and support policy. For more information, see [Azure SDK Lifecycle and support policy](https://azure.github.io/azure-sdk/policies_support.html).
+
+## Package lifecycle
+
+The releases fall into the following categories, each with a defined support structure.
+1. **Beta** - Also known as Preview or Release Candidate. Available for early access and feedback purposes and **is not recommended** for use in production. The preview version support is limited to GitHub issues. Preview releases typically live for less than six months, after which they're either deprecated or released as active.
+
+1. **Active** - Generally available and fully supported, receives new feature updates, as well as bug and security fixes. We recommend that customers use the **latest version** because that version receives fixes and updates.
+
+1. **Deprecated** - Superseded by a more recent release. Deprecation occurs at the same time the new release becomes active. Deprecated releases address the most critical bug fixes and security fixes for another **12 months**.
+
+## Get support
+
+If you experience problems while using the Azure IoT SDKs, there are several ways to seek support:
+
+* **Reporting bugs** - All customers can report bugs on the issues page for the GitHub repository associated with the relevant SDK.
+
+* **Microsoft Customer Support team** - Users who have a [support plan](https://azure.microsoft.com/support/plans/) can engage the Microsoft Customer Support team by creating a support ticket directly from the [Azure portal](https://portal.azure.com/signin/index/?feature.settingsportalinstance=mpac).
iot Iot Introduction https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot/iot-introduction.md
The Azure Internet of Things (IoT) is a collection of Microsoft-managed cloud se
The following diagram shows a high-level view of the components in a typical IoT solution. This article focuses on the key groups of components: devices, IoT cloud services, other cloud services, and solution-wide concerns. Other articles in this section provide more detail on each of these components. ## IoT devices
iot Iot Overview Analyze Visualize https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot/iot-overview-analyze-visualize.md
+
+ Title: Analyze and visualize your IoT data
+description: An overview of the available options to analyze and visualize data in an IoT solution.
+++++ Last updated : 04/11/2023++
+# As a solution builder, I want a high-level overview of the options for analyzing and visualizing device data in an IoT solution.
++
+# Analyze and visualize your IoT data
+
+This overview introduces the key concepts around the options to analyze and visualize your IoT data. Each section includes links to content that provides further detail and guidance.
+
+The following diagram shows a high-level view of the components in a typical IoT solution. This article focuses on the areas relevant to analyzing and visualizing your IoT data.
++
+In Azure IoT, analysis and visualization services are used to identify and display business insights derived from your IoT data. For example, you can use a machine learning model to analyze device telemetry and predict when maintenance should be carried out on an industrial asset. You can also use a visualization tool to display a map of the location of your devices.
+
+## Azure Digital Twins
+
+The Azure Digital Twins service lets you build and maintain models that are live, up-to-date representations of the real world. You can query, analyze, and generate visualizations from these models to extract business insights. An example model might be a representation of a building that includes information about the rooms, the devices in the rooms, and the relationships between the rooms and devices. The real-world data that populates these models is typically collected from IoT devices and sent through an IoT hub.
+
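+As a minimal sketch of extracting insights from such a model, the following example queries twins using the `azure-digitaltwins-core` Python package. The instance host name and the `Temperature` property are placeholders used only for illustration.
+
+```python
+# Minimal sketch: query an Azure Digital Twins instance for hot rooms.
+# Assumes the azure-digitaltwins-core and azure-identity packages are installed
+# and the signed-in identity has the Azure Digital Twins Data Reader role.
+from azure.identity import DefaultAzureCredential
+from azure.digitaltwins.core import DigitalTwinsClient
+
+# Placeholder host name for your Azure Digital Twins instance.
+client = DigitalTwinsClient(
+    "https://<your-instance>.api.<region>.digitaltwins.azure.net",
+    DefaultAzureCredential(),
+)
+
+# Find all twins whose (hypothetical) Temperature property exceeds a threshold.
+for twin in client.query_twins("SELECT * FROM digitaltwins T WHERE T.Temperature > 75"):
+    print(twin["$dtId"], twin.get("Temperature"))
+```
+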
+## External services
+
+There are many services you can use to analyze and visualize your IoT data. Some services are designed to work with streaming IoT data, while others are more general-purpose. The following services are some of the most common ones used for analysis and visualization in IoT solutions:
+
+### Azure Data Explorer
+
+[Azure Data Explorer](/azure/data-explorer/data-explorer-overview/) is a fully managed, high-performance, big-data analytics platform that makes it easy to analyze high volumes of data in near real time. The following articles and tutorials show some examples of how to use Azure Data Explorer to analyze and visualize IoT data:
+
+- [IoT Hub data connection (Azure Data Explorer)](/azure/data-explorer/ingest-data-iot-hub-overview)
+- [Explore an Azure IoT Central industrial scenario](../iot-central/core/tutorial-industrial-end-to-end.md)
+- [Export IoT data to Azure Data Explorer (IoT Central)](../iot-central/core/howto-export-to-azure-data-explorer.md)
+- [Azure Digital Twins query plugin for Azure Data Explorer](../digital-twins/concepts-data-explorer-plugin.md)
+
+### Databricks
+
+Use [Azure Databricks](/azure/databricks/introduction/) to process, store, clean, share, analyze, model, and monetize datasets with solutions from BI to machine learning. Use the Azure Databricks platform to build and deploy data engineering workflows, machine learning models, analytics dashboards, and more.
+
+- [Use structured streaming with Azure Event Hubs and Azure Databricks clusters](/azure/databricks/structured-streaming/streaming-event-hubs/). You can connect a Databricks workspace to the Event Hubs-compatible endpoint on an IoT hub to read data from IoT devices, as sketched after this list.
+- [Extend Azure IoT Central with custom analytics](../iot-central/core/howto-create-custom-analytics.md)
+
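+The following is a hedged sketch of that pattern. It assumes the `azure-eventhubs-spark` connector library is installed on the Databricks cluster and that `spark` and `sc` are the notebook's built-in Spark session and context; the connection string is a placeholder from your hub's **Built-in endpoints** page.
+
+```python
+# Minimal sketch: stream device-to-cloud telemetry from the IoT hub's
+# Event Hubs-compatible endpoint into a Databricks notebook.
+connection_string = "<Event Hubs-compatible connection string>"
+
+ehConf = {
+    # Recent versions of the connector expect the connection string to be encrypted.
+    "eventhubs.connectionString":
+        sc._jvm.org.apache.spark.eventhubs.EventHubsUtils.encrypt(connection_string)
+}
+
+df = spark.readStream.format("eventhubs").options(**ehConf).load()
+
+# The telemetry body arrives as binary; cast it to a string for downstream queries.
+telemetry = df.selectExpr("CAST(body AS STRING) AS body", "enqueuedTime")
+query = telemetry.writeStream.format("memory").queryName("telemetry").start()
+```
+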
+### Azure Stream Analytics
+
+Azure Stream Analytics is a fully managed stream processing engine that is designed to analyze and process large volumes of streaming data with low latency. Patterns and relationships can be identified in data that originates from various input sources including applications, devices, and sensors. You can use these patterns to trigger actions and initiate workflows such as creating alerts or feeding information to a reporting tool. Stream Analytics is also available on the Azure IoT Edge runtime, enabling data processing directly on the edge.
+
+- [Build an IoT solution by using Stream Analytics](../stream-analytics/stream-analytics-build-an-iot-solution-using-stream-analytics.md)
+- [Real-time data visualization of data from Azure IoT Hub](../iot-hub/iot-hub-live-data-visualization-in-power-bi.md)
+- [Extend Azure IoT Central with custom rules and notifications](../iot-central/core/howto-create-custom-rules.md)
+- [Deploy Azure Stream Analytics as an IoT Edge module](../iot-edge/tutorial-deploy-stream-analytics.md)
+
+### Power BI
+
+[Power BI](/power-bi/fundamentals/power-bi-overview) is a collection of software services, apps, and connectors that work together to turn your unrelated sources of data into coherent, visually immersive, and interactive insights. Power BI lets you easily connect to your data sources, visualize and discover what's important, and share that with anyone or everyone you want.
+
+- [Visualize real-time sensor data from Azure IoT Hub using Power BI](../iot-hub/iot-hub-live-data-visualization-in-power-bi.md)
+- [Export data from Azure IoT Central and visualize insights in Power BI](../iot-central/retail/tutorial-in-store-analytics-export-data-visualize-insights.md)
+
+### Azure Maps
+
+[Azure Maps](../azure-maps/about-azure-maps.md) is a collection of geospatial services and SDKs that use fresh mapping data to provide geographic context to web and mobile applications. For an IoT example, see [Integrate with Azure Maps (Azure Digital Twins)](../digital-twins/how-to-integrate-maps.md)
+
+### Grafana
+
+[Grafana](https://grafana.com/) is visualization and analytics software. It allows you to query, visualize, alert on, and explore your metrics, logs, and traces no matter where they're stored. It provides you with tools to turn your time-series database data into insightful graphs and visualizations. [Azure Managed Grafana](https://azure.microsoft.com/products/managed-grafana) is a fully managed service for analytics and monitoring solutions. To learn more about using Grafana in your IoT solution, see [Cloud IoT dashboards using Grafana with Azure IoT](https://sandervandevelde.wordpress.com/2021/06/15/cloud-iot-dashboards-using-grafana-with-azure-iot/).
+
+## IoT Central
+
+IoT Central provides a rich set of features that you can use to analyze and visualize your IoT data. The following articles and tutorials show some examples of how to use IoT Central to analyze and visualize IoT data:
+
+- [How to use IoT Central data explorer to analyze device data](../iot-central/core/howto-create-analytics.md)
+- [Create and manage IoT Central dashboards](../iot-central/core/howto-manage-dashboards.md)
+
+## Next steps
+
+Now that you've seen an overview of the analysis and visualization options available to your IoT solution, some suggested next steps include:
+
+- [Choose the right IoT solution](iot-solution-options.md)
+- [Azure IoT services and technologies](iot-services-and-technologies.md)
iot Iot Overview Device Connectivity https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot/iot-overview-device-connectivity.md
IoT Central applications use the IoT Hub and the Device Provisioning Service (DP
The following diagram shows a high-level view of the components in a typical IoT solution. This article focuses on the connectivity between the devices and the IoT cloud services, including gateways and bridges, shown in the diagram. ## Primitives
The open source IoT Central Device Bridge acts as a translator that forwards tel
Now that you've seen an overview of device connectivity in Azure IoT solutions, some suggested next steps include -- [IoT device development](iot-overview-device-development.md) - [Device management and control in IoT solutions](iot-overview-device-management.md)
+- [Process and route messages](iot-overview-message-processing.md)
iot Iot Overview Device Development https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot/iot-overview-device-development.md
This overview introduces the key concepts around developing devices that connect
The following diagram shows a high-level view of the components in a typical IoT solution. This article focuses on the devices and gateway shown in the diagram. In Azure IoT, a device developer writes the code to run on the devices in the solution. This code typically:
iot Iot Overview Device Management https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot/iot-overview-device-management.md
IoT Central applications use the IoT Hub and the Device Provisioning Service (DP
The following diagram shows a high-level view of the components in a typical IoT solution. This article focuses on the device management and control components of an IoT solution. In Azure IoT, device management refers to processes such as provisioning and updating devices. Device management includes the following tasks:
To learn more, see:
Now that you've seen an overview of device management and control in Azure IoT solutions, some suggested next steps include -- [IoT device development](iot-overview-device-development.md)-- [Device infrastructure and connectivity](iot-overview-device-connectivity.md)
+- [Process and route messages](iot-overview-message-processing.md)
+- [Extend your IoT solution](iot-overview-solution-extensibility.md)
iot Iot Overview Message Processing https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot/iot-overview-message-processing.md
+
+ Title: Process messages from your devices
+description: An overview of message processing options in an Azure IoT solution including routing and enrichments.
+++++ Last updated : 04/03/2023++
+# As a solution builder or device developer I want a high-level overview of the message processing in IoT solutions so that I can easily find relevant content for my scenario.
++
+# Message processing in an IoT solution
+
+This overview introduces the key concepts around processing messages sent from your devices in a typical Azure IoT solution. Each section includes links to content that provides further detail and guidance.
+
+The following diagram shows a high-level view of the components in a typical IoT solution. This article focuses on the message processing components of an IoT solution.
++
+In Azure IoT, message processing refers to processes such as routing and enriching telemetry messages sent by devices. These processes are used to control the flow of messages through the IoT solution and to add information to the messages.
+
+## Route messages
+
+An IoT hub provides a cloud entry point for the telemetry messages that your devices send. In a typical IoT solution, these messages are delivered to other downstream services for storage or analysis.
+
+### IoT Hub routing
+
+In IoT hub, you can configure routing to deliver telemetry messages to the destinations of your choice. Destinations include:
+
+- Storage containers
+- Service Bus queues
+- Service Bus topics
+- Event Hubs
+
+Every IoT hub has a default destination called the *built-in* endpoint. Downstream services can [connect to the built-in endpoint to receive messages](../iot-hub/iot-hub-devguide-messages-read-builtin.md) from the IoT hub.
+
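+As a rough sketch of that pattern, a downstream service can read the stream with the `azure-eventhub` Python package. The connection string and entity name are placeholders you copy from the hub's **Built-in endpoints** page.
+
+```python
+# Minimal sketch: read device-to-cloud messages from the built-in endpoint.
+# Assumes the azure-eventhub package is installed.
+from azure.eventhub import EventHubConsumerClient
+
+CONNECTION_STR = "<Event Hubs-compatible connection string>"
+EVENTHUB_NAME = "<Event Hubs-compatible name>"
+
+def on_event(partition_context, event):
+    # Print the telemetry body and system properties (these include the sending device ID).
+    print(event.body_as_str())
+    print(event.system_properties)
+    partition_context.update_checkpoint(event)
+
+client = EventHubConsumerClient.from_connection_string(
+    CONNECTION_STR, consumer_group="$Default", eventhub_name=EVENTHUB_NAME
+)
+with client:
+    # starting_position="-1" reads from the beginning of each partition.
+    client.receive(on_event=on_event, starting_position="-1")
+```
+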
+To learn more, see [Use IoT Hub message routing to send device-to-cloud messages to different endpoints](../iot-hub/iot-hub-devguide-messages-d2c.md).
+
+You can use [queries to filter the messages](../iot-hub/iot-hub-devguide-routing-query-syntax.md) sent to different destinations.
+
+### IoT Central routing
+
+If you're using IoT Central, you can use data export to send telemetry messages to other downstream services. Destinations include:
+
+- Storage containers
+- Service Bus queues
+- Service Bus topics
+- Event Hubs
+- Azure Data Explorer
+- Webhooks
+
+An IoT Central data export configuration lets you filter the messages sent to a destination.
+
+To learn more, see [Export data from IoT Central](../iot-central/core/howto-export-to-blob-storage.md).
+
+### Event Grid
+
+IoT Hub has built-in integration with [Azure Event Grid](../event-grid/overview.md). An IoT hub can publish an event whenever it receives a telemetry message from a device. You can use Event Grid to route these events to other services.
+
+To learn more, see [React to IoT Hub events by using Event Grid to trigger actions](../iot-hub/iot-hub-event-grid.md) and [Compare message routing and Event Grid for IoT Hub](../iot-hub/iot-hub-event-grid-routing-comparison.md).
+
+## Enrich or transform messages
+
+To simplify downstream processing, you may want to add data to telemetry messages or modify their structure.
+
+### IoT Hub message enrichments
+
+IoT Hub message enrichments let you add data to the messages sent by your devices. You can add:
+
+- A static string
+- The name of the IoT hub processing the message
+- Information from the device twin
+
+To learn more, see [Message enrichments for device-to-cloud IoT Hub messages](../iot-hub/iot-hub-message-enrichments-overview.md).
+
+### IoT Central message transformations
+
+IoT Central has two options for transforming telemetry messages:
+
+- Use [mappings](../iot-central/core/howto-map-data.md) to transform complex device telemetry into structured data on ingress to IoT Central.
+- Use [transformations](../iot-central/core/howto-transform-data-internally.md) to manipulate the format and structure of the device data before it's exported to a destination.
+
+## Process messages at the edge
+
+An Azure IoT Edge module can process telemetry from an attached sensor or device before it's sent to an IoT hub. For example, before it sends data to the cloud an IoT Edge module can:
+
+- [Filter data](../iot-edge/tutorial-deploy-function.md)
+- Aggregate data
+- [Convert data](../iot-central/core/howto-transform-data.md#data-transformation-at-ingress)
+
+## Other cloud services
+
+You can use other Azure services to process telemetry messages from your devices. Both IoT Hub and IoT Central can route messages to other services. For example, you can forward telemetry messages to:
+
+[Azure Stream Analytics](../stream-analytics/stream-analytics-introduction.md) is a managed stream processing engine that is designed to analyze and process large volumes of streaming data. Stream Analytics can identify patterns in your data and then trigger actions such as creating alerts, feeding information to a reporting tool, or storing the transformed data. Stream Analytics is also available on the Azure IoT Edge runtime, enabling it to process data at the edge rather than in the cloud.
+
+[Azure Functions](../azure-functions/functions-overview.md) is a serverless compute service that lets you run code in response to events. You can use Azure Functions to process telemetry messages from your devices.
+
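+As a hedged sketch, the body of such a function (Python v1 programming model) might look like the following. It assumes an Event Hubs trigger binding pointed at the IoT hub's built-in endpoint is configured in *function.json*, and the `temperature` field is a hypothetical telemetry value.
+
+```python
+# Minimal sketch of an Azure Functions handler that processes device telemetry.
+import json
+import logging
+
+import azure.functions as func
+
+def main(event: func.EventHubEvent):
+    # The message body is the raw telemetry payload the device sent.
+    telemetry = json.loads(event.get_body().decode("utf-8"))
+    logging.info("Received telemetry: %s", telemetry)
+
+    # Example rule: flag a high reading from the hypothetical temperature field.
+    if telemetry.get("temperature", 0) > 75:
+        logging.warning("High temperature reported: %s", telemetry["temperature"])
+```
+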
+To learn more, see:
+
+- [Azure IoT Hub bindings for Azure Functions](../azure-functions/functions-bindings-event-iot.md)
+- [Visualize real-time sensor data from Azure IoT Hub using Power BI](../iot-hub/iot-hub-live-data-visualization-in-power-bi.md)
+- [Extend Azure IoT Central with custom rules using Stream Analytics, Azure Functions, and SendGrid](../iot-central/core/howto-create-custom-rules.md)
+
+## Next steps
+
+Now that you've seen an overview of message processing in Azure IoT solutions, some suggested next steps include:
+
+- [Extend your IoT solution](iot-overview-solution-extensibility.md)
+- [Analyze and visualize your IoT data](iot-overview-analyze-visualize.md)
iot Iot Overview Solution Extensibility https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot/iot-overview-solution-extensibility.md
+
+ Title: Extend your IoT solution
+description: An overview of the extensibility options in an IoT solution.
+++++ Last updated : 04/03/2023++
+# As a solution builder, I want a high-level overview of the options for extending an IoT solution so that I can easily find relevant content for my scenario.
++
+# Extend your IoT solution
+
+This overview introduces the key concepts around the options to extend an Azure IoT solution. Each section includes links to content that provides further detail and guidance.
+
+The following diagram shows a high-level view of the components in a typical IoT solution. This article focuses on the areas relevant to extending an IoT solution.
++
+In Azure IoT, solution extensibility refers to the ways you can add to the built-in functionality of the IoT cloud services and build integrations with other services.
+
+## Extensibility scenarios
+
+Extensibility scenarios for IoT solutions include:
+
+### Analysis and visualization
+
+A typical IoT solution includes the analysis and visualization of the data from your devices to enable business insights. To learn more, see [Analyze and visualize your IoT data](iot-overview-analyze-visualize.md).
+
+### Integration with other services
+
+An IoT solution may include other systems such as asset management, work scheduling, and control automation systems. Such systems might:
+
+- Use data from your IoT devices as input to predictive maintenance systems that generate entries in a work scheduling system.
+- Update the device registry to ensure it has up-to-date data from your asset management system.
+- Send messages to your devices to control their behavior based on rules in a control automation system.
+
+## Azure Health Data Services
+
+[Azure Health Data Services](../healthcare-apis/healthcare-apis-overview.md) is a set of managed API services based on open standards and frameworks that enable workflows to improve healthcare and offer scalable and secure healthcare solutions. An IoT solution can use these services to integrate IoT data into a healthcare solution. To learn more, see [Deploy and review the continuous patient monitoring application template (IoT Central)](../iot-central/healthcare/tutorial-continuous-patient-monitoring.md)
+
+## Industrial IoT (IIoT)
+
+Azure IIoT lets you integrate data from assets and sensors, including those systems that are already operating on your factory floor, into your Azure IoT solution. To learn more, see [Industrial IoT](../industrial-iot/overview-what-is-industrial-iot.md).
+
+## Extensibility mechanisms
+
+The following sections describe the key mechanisms available to extend your IoT solution.
+
+### Service APIs (IoT Hub)
+
+IoT Hub and the Device Provisioning Service (DPS) provide a set of service APIs that you can use to manage and interact with your hub and devices. These APIs include:
+
+- Registry management
+- Interacting with device twins and digital twins
+- Sending cloud-to-device messages and calling commands
+- Managing enrollment groups (DPS)
+- Managing initial device twin state (DPS)
+
+For a list of the available service APIs, see [Service SDKs](iot-sdks.md#service-sdks).
+
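+As an illustrative sketch (not an exhaustive sample), the `azure-iot-hub` Python service SDK exposes these operations through classes such as `IoTHubRegistryManager`. The connection string and device ID below are placeholders.
+
+```python
+# Minimal sketch: read a device twin and send a cloud-to-device message.
+# Assumes the azure-iot-hub package is installed.
+from azure.iot.hub import IoTHubRegistryManager
+
+IOTHUB_CONNECTION_STRING = "<IoT hub service connection string>"
+DEVICE_ID = "<device-id>"
+
+registry_manager = IoTHubRegistryManager(IOTHUB_CONNECTION_STRING)
+
+# Read the device twin from the identity registry.
+twin = registry_manager.get_twin(DEVICE_ID)
+print(twin.properties.reported)
+
+# Send a cloud-to-device message to the device.
+registry_manager.send_c2d_message(DEVICE_ID, "Hello from the service")
+```
+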
+### REST APIs (IoT Central)
+
+The IoT Central REST API provides the following capabilities that are useful for extending your IoT solution:
+
+- Query the devices connected to your application
+- Manage device templates and deployment manifests
+- Manage devices and device groups
+- Control devices by interacting with device properties and calling commands
+
+To learn more, see [IoT Central REST API](../iot-central/core/howto-query-with-rest-api.md).
+
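+As a hedged sketch, the following example lists devices by calling the REST API directly with the `requests` package. The application subdomain and API token are placeholders, and the `api-version` value should be checked against the current REST API reference.
+
+```python
+# Minimal sketch: list the devices in an IoT Central application.
+import requests
+
+APP_SUBDOMAIN = "<your-app-subdomain>"
+API_TOKEN = "<IoT Central API token>"  # created under Permissions > API tokens
+
+response = requests.get(
+    f"https://{APP_SUBDOMAIN}.azureiotcentral.com/api/devices",
+    headers={"Authorization": API_TOKEN},
+    params={"api-version": "2022-07-31"},
+)
+response.raise_for_status()
+
+for device in response.json().get("value", []):
+    print(device["id"], device.get("displayName"))
+```
+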
+### Routing and data export
+
+IoT Hub and IoT Central both let you [route device telemetry to different endpoints](iot-overview-message-processing.md#iot-hub-routing). Routing telemetry enables you to build integrations with other services and to export data for analysis and visualization.
+
+In addition to device telemetry, both IoT Hub and IoT Central can send property update and device connection status messages to other endpoints. Routing these messages enables you to build integrations with other services that need device status information:
+
+- [IoT Hub routing](../iot-hub/iot-hub-devguide-messages-d2c.md) can send device telemetry, property change events, device connectivity events, and device lifecycle events to destinations such as [Azure Event Hubs](../event-hubs/event-hubs-about.md), [Azure Blob Storage](../storage/blobs/storage-blobs-overview.md), and [Cosmos DB](../cosmos-db/introduction.md).
+- [IoT Hub Event Grid integration](../iot-hub/iot-hub-event-grid.md) uses Azure Event Grid to distribute IoT Hub events such as device connectivity, device lifecycle, and telemetry events to other Azure services.
+- [IoT Central rules](../iot-central/core/howto-configure-rules.md) can send device telemetry and property values to webhooks, [Microsoft Power Automate](/power-automate/getting-started/), and [Azure Logic Apps](/azure/logic-apps/logic-apps-overview/).
+- [IoT Central data export](../iot-central/core/howto-export-data.md) can send device telemetry, property change events, device connectivity events, and device lifecycle events to destinations such as [Azure Blob Storage](../storage/blobs/storage-blobs-overview.md), [Azure Data Explorer](/azure/data-explorer/data-explorer-overview/), [Azure Event Hubs](../event-hubs/event-hubs-about.md), and webhooks.
+
+### IoT Central application templates
+
+The IoT Central application templates provide a starting point for building IoT solutions that include integrations with other services. You can use the templates to create an application that includes resources that are relevant to your solution. To learn more, see [IoT Central application templates](../iot-central/core/howto-create-iot-central-application.md#create-and-use-a-custom-application-template).
+
+## Next steps
+
+Now that you've seen an overview of the extensibility options available to your IoT solution, some suggested next steps include:
+
+- [Analyze and visualize your IoT data](iot-overview-analyze-visualize.md)
+- [Choose the right IoT solution](iot-solution-options.md)
key-vault Integrate Databricks Blob Storage https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/key-vault/general/integrate-databricks-blob-storage.md
Title: Access Azure Blob Storage using Azure Databricks and Azure Key Vault #Required; page title displayed in search results. Include the word "tutorial". Include the brand.
-description: In this tutorial, you'll learn how to access Azure Blob Storage from Azure Databricks using a secret stored in Azure Key Vault #Required; article description that is displayed in search results. Include the word "tutorial".
---
+ Title: Access Azure Blob Storage using Azure Databricks and Azure Key Vault
+description: In this tutorial, you'll learn how to access Azure Blob Storage from Azure Databricks using a secret stored in Azure Key Vault
+++ Last updated 01/20/2023
key-vault Quick Create Terraform https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/key-vault/keys/quick-create-terraform.md
+
+ Title: 'Quickstart: Create an Azure key vault and key using Terraform'
+description: 'In this article, you create an Azure key vault and key using Terraform'
+++++++ Last updated : 4/14/2023++
+# Quickstart: Create an Azure key vault and key using Terraform
+
+[Azure Key Vault](../general/overview.md) is a cloud service that provides a secure store for secrets, such as keys, passwords, and certificates. This article focuses on the process of deploying a Terraform file to create a key vault and a key.
++
+In this article, you learn how to:
+
+> [!div class="checklist"]
+> * Create a random value for the Azure resource group name using [random_pet](https://registry.terraform.io/providers/hashicorp/random/latest/docs/resources/pet)
+> * Create an Azure resource group using [azurerm_resource_group](https://registry.terraform.io/providers/hashicorp/azurerm/latest/docs/resources/resource_group)
+> * Create a random value using [random_string](https://registry.terraform.io/providers/hashicorp/random/latest/docs/resources/string)
+> * Create an Azure key vault using [azurerm_key_vault](https://registry.terraform.io/providers/hashicorp/azurerm/latest/docs/resources/key_vault)
+> * Create an Azure key vault key using [azurerm_key_vault_key](https://registry.terraform.io/providers/hashicorp/azurerm/latest/docs/resources/key_vault_key)
++
+## Prerequisites
+
+- [Install and configure Terraform](/azure/developer/terraform/quickstart-configure)
+
+## Implement the Terraform code
+
+> [!NOTE]
+> The sample code for this article is located in the [Azure Terraform GitHub repo](https://github.com/Azure/terraform/tree/master/quickstart/101-key-vault-key). You can view the log file containing the [test results from current and previous versions of Terraform](https://github.com/Azure/terraform/tree/master/quickstart/101-key-vault-key/TestRecord.md).
+>
+> See more [articles and sample code showing how to use Terraform to manage Azure resources](/azure/terraform)
+
+1. Create a directory in which to test and run the sample Terraform code and make it the current directory.
+
+1. Create a file named `providers.tf` and insert the following code:
+
+ [!code-terraform[master](~/terraform_samples/quickstart/101-key-vault-key/providers.tf)]
+
+1. Create a file named `main.tf` and insert the following code:
+
+ [!code-terraform[master](~/terraform_samples/quickstart/101-key-vault-key/main.tf)]
+
+1. Create a file named `variables.tf` and insert the following code:
+
+ [!code-terraform[master](~/terraform_samples/quickstart/101-key-vault-key/variables.tf)]
+
+1. Create a file named `outputs.tf` and insert the following code:
+
+ [!code-terraform[master](~/terraform_samples/quickstart/101-key-vault-key/outputs.tf)]
+
+## Initialize Terraform
++
+## Create a Terraform execution plan
++
+## Apply a Terraform execution plan
++
+## Verify the results
+
+#### [Azure CLI](#tab/azure-cli)
+
+1. Get the Azure key vault name.
+
+ ```console
+ azurerm_key_vault_name=$(terraform output -raw azurerm_key_vault_name)
+ ```
+
+1. Run [az keyvault key list](/cli/azure/keyvault/key#az-keyvault-key-list) to display information about the key vault's keys.
+
+ ```azurecli
+ az keyvault key list --vault-name $azurerm_key_vault_name
+ ```
+
+#### [Azure PowerShell](#tab/azure-powershell)
+
+1. Get the Azure key vault name.
+
+ ```console
+ $azurerm_key_vault_name=$(terraform output -raw azurerm_key_vault_name)
+ ```
+
+1. Run [Get-AzKeyVaultKey](/powershell/module/az.keyvault/get-azkeyvaultkey) to display information about the key vault's keys.
+
+ ```azurepowershell
+ Get-AzKeyVaultKey -VaultName $azurerm_key_vault_name
+ ```
+++
+## Clean up resources
++
+## Troubleshoot Terraform on Azure
+
+[Troubleshoot common problems when using Terraform on Azure](/azure/developer/terraform/troubleshoot)
+
+## Next steps
+
+> [!div class="nextstepaction"]
+> [Key Vault security overview](../general/security-features.md)
load-balancer Manage Inbound Nat Rules https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/load-balancer/manage-inbound-nat-rules.md
Last updated 03/15/2022-+ # Manage inbound NAT rules for Azure Load Balancer using the Azure portal
load-balancer Manage Probes How To https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/load-balancer/manage-probes-how-to.md
Last updated 08/28/2022-+ # Manage health probes for Azure Load Balancer using the Azure portal
load-balancer Skus https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/load-balancer/skus.md
# Azure Load Balancer SKUs >[!Important]
->On September 30, 2025, Basic Load Balancer will be retired. For more information, see the [official announcement](https://azure.microsoft.com/updates/azure-basic-load-balancer-will-be-retired-on-30-september-2025-upgrade-to-standard-load-balancer/). If you are currently using Basic Load Balancer, make sure to upgrade to Standard Load Balancer prior to the retirement date. This article will help guide you through the upgrade process.
+>On September 30, 2025, Basic Load Balancer will be retired. For more information, see the [official announcement](https://azure.microsoft.com/updates/azure-basic-load-balancer-will-be-retired-on-30-september-2025-upgrade-to-standard-load-balancer/). If you are currently using Basic Load Balancer, make sure to upgrade to Standard Load Balancer prior to the retirement date. For guidance on upgrading, visit [Upgrading from Basic Load Balancer - Guidance](load-balancer-basic-upgrade-guidance.md).
Azure Load Balancer has three SKUs.
load-balancer Tutorial Deploy Cross Region Load Balancer Template https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/load-balancer/tutorial-deploy-cross-region-load-balancer-template.md
+
+ Title: Deploy a cross-region load balancer with Azure Resource Manager templates | Microsoft Docs
+description: Deploy a cross-region load balancer with Azure Resource Manager templates
++++ Last updated : 04/12/2023+
+#Customer intent: As an administrator, I want to deploy a cross-region load balancer for global high availability of my application or service.
++
+# Tutorial: Deploy a cross-region load balancer with Azure Resource Manager templates
+
+A cross-region load balancer ensures a service is available globally across multiple Azure regions. If one region fails, the traffic is routed to the next closest healthy regional load balancer.
+
+Using an ARM template takes fewer steps compared to other deployment methods.
++
+If your environment meets the prerequisites and you're familiar with using ARM templates, select the **Deploy to Azure** button. The template opens in the Azure portal.
+
+[![Deploy to Azure](../media/template-deployments/deploy-to-azure.svg)](https://portal.azure.com/#create/Microsoft.Template/uri/https%3A%2F%2Fraw.githubusercontent.com%2FAzure%2Fazure-quickstart-templates%2Fmaster%2Fquickstarts%2Fmicrosoft.network%2Fload-balancer-cross-region%2Fazuredeploy.json)
+
+In this tutorial, you learn how to:
+
+> [!div class="checklist"]
+> * Review the cross-region load balancer template
+> * Deploy the template
+> * Verify the deployment
+> * Clean up resources
+
+## Prerequisites
+
+- An Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F) and access to the Azure portal.
+
+## Review the template
+In this section, you review the template and the parameters that are used to deploy the cross-region load balancer.
+The template used in this quickstart is from the [Azure Quickstart Templates](https://azure.microsoft.com/resources/templates/load-balancer-cross-region/).
++
+> [!NOTE]
+> When you create a standard load balancer, you must also create a new standard public IP address that is configured as the frontend for the standard load balancer. Also, the Load balancers and public IP SKUs must match. In our case, we will create two standard public IP addresses, one for the regional level load balancer and another for the cross-region load balancer.
+
+Multiple Azure resources have been defined in the template:
+- [**Microsoft.Network/loadBalancers**](/azure/templates/microsoft.network/loadBalancers): Regional and cross-region load balancers.
+
+- [**Microsoft.Network/publicIPAddresses**](/azure/templates/microsoft.network/publicipaddresses): for the load balancer, bastion host, and for each of the virtual machines.
+- [**Microsoft.Network/bastionHosts**](/azure/templates/microsoft.network/bastionhosts)
+- [**Microsoft.Network/networkSecurityGroups**](/azure/templates/microsoft.network/networksecuritygroups)
+
+- [**Microsoft.Network/virtualNetworks**](/azure/templates/microsoft.network/virtualNetworks): Virtual network for load balancer and virtual machines.
+
+- [**Microsoft.Compute/virtualMachines**](/azure/templates/microsoft.compute/virtualMachines) (2): Virtual machines.
+
+- [**Microsoft.Network/networkInterfaces**](/azure/templates/microsoft.network/networkInterfaces) (2): Network interfaces for virtual machines.
+
+- [**Microsoft.Compute/virtualMachine/extensions**](/azure/templates/microsoft.compute/virtualmachines/extensions) (2): Used to configure Internet Information Services (IIS) and the web pages.
+
+To find more templates that are related to Azure Load Balancer, see [Azure Quickstart Templates](https://azure.microsoft.com/resources/templates/?resourceType=Microsoft.Network&pageNumber=1&sort=Popular).
+
+## Deploy the template
+
+1. Sign in to the Azure portal at [https://portal.azure.com](https://portal.azure.com).
+1. Enter and select **Deploy a custom template** in the search bar
+1. In the **Custom deployment** page, enter **load-balancer-cross-region** in the **Quickstart template** textbox and select **quickstarts/microsoft.network/load-balancer-cross-region**.
+
+ :::image type="content" source="media/tutorial-deploy-cross-region-load-balancer-template/select-quickstart-template.png" alt-text="Screenshot of Custom deployment page for selecting quickstart ARM template.":::
+
+1. Choose **Select template** and enter the following information:
+
+ | Name | Value |
+ | | |
+ | Subscription | Select your subscription |
+ | Resource group | Select your resource group or create a new resource group |
+ | Region | Select the deployment region for resources |
+ | Project Name | Enter a project name used to create unique resource names |
+ | LocationCR | Select the deployment region for the cross-region load balancer |
+ | Location-r1 | Select the deployment region for the regional load balancer and VMs |
+ | Location-r2 | Select the deployment region for the second regional load balancer and VMs |
+ | Admin Username | Enter a username for the virtual machines |
+ | Admin Password | Enter a password for the virtual machines |
++
+1. Select **Review + create** to run template validation.
+1. If no errors are present, review the terms of the template and select **Create**.
+
+## Verify the deployment
+
+1. If necessary, sign in to the [Azure portal](https://portal.azure.com).
+1. Select **Resource groups** from the left pane.
+1. Select the resource group used in the deployment. The default resource group name is the **project name** with **-rg** appended. For example, **crlb-learn-arm-rg**.
+1. Select the cross-region load balancer. Its default name is the project name with **-cr** appended. For example, **crlb-learn-arm-cr**.
+1. Copy only the IP address part of the public IP address, and then paste it into the address bar of your browser. The page resolves to a default IIS Windows Server web page.
+
+ :::image type="content" source="media/tutorial-deploy-cross-region-load-balancer-template/default-web-page.png" alt-text="Screenshot of default IIS Windows Server web page in web browser.":::
+
+## Clean up resources
+
+When you no longer need them, delete the:
+
+* Resource group
+* Load balancer
+* Related resources
+
+1. Go to the Azure portal, select the resource group that contains the load balancer, and then select **Delete resource group**.
+1. Select **Apply force delete for selected Virtual machines and Virtual machine scale sets**, enter the name of the resource group, and then select **Delete** > **Delete**.
+
+## Next steps
+
+In this tutorial, you:
+- Created a cross-region load balancer
+- Created a regional load balancer
+- Created three virtual machines and linked them to the regional load balancer
+- Configured the cross-region load balancer to work with the regional load balancer
+- Tested the cross-region load balancer.
+
+Learn more about cross-region load balancer by advancing to the next article, which shows how to create a load balancer with more than one availability set in the backend pool.
+> [!div class="nextstepaction"]
+> [Tutorial: Create a load balancer with more than one availability set in the backend pool](tutorial-multi-availability-sets-portal.md)
machine-learning Azure Machine Learning Ci Image Release Notes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/azure-machine-learning-ci-image-release-notes.md
Azure Machine Learning checks and validates any machine learning packages that m
Main updates provided with each image version are described in the below sections.
+## April 7, 2023
+Version: `23.04.07`
+
+Main changes:
+
+- `Azure Machine Learning SDK` to version `1.49.0`
+- `Certifi` updated to `2022.9.24`
+- `.Net` updated from `3.1` (EOL) to `6.0`
+- `Pyspark` update to `3.3.1` (mitigating log4j 1.2.17 and common-text-1.6 vulnerabilities)
+- Default `intellisense` to Python `3.10` on the CI
+- Bug fixes and stability improvements
+
+Main environment specific updates:
+
+- `Azureml_py38` environment is now the default.
+ ## January 19, 2023 Version: `23.01.19`
machine-learning Concept Automated Ml https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/concept-automated-ml.md
Previously updated : 03/15/2022 Last updated : 04/13/2023
machine-learning How To Access Azureml Behind Firewall https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-access-azureml-behind-firewall.md
Previously updated : 01/10/2023 Last updated : 04/14/2023 ms.devlang: azurecli monikerRange: 'azureml-api-2 || azureml-api-1'
__Azure Machine Learning compute instance and compute cluster hosts__
| Compute instance | `*.instances.azureml.net` | TCP | 443 | | Compute instance | `*.instances.azureml.ms` | TCP | 443, 8787, 18881 | | Compute instance | `<region>.tundra.azureml.ms` | UDP | 5831 |
-| Compute instance | `*.batch.azure.com` | ANY | 443 |
-| Compute instance | `*.service.batch.com` | ANY | 443 |
+| Compute instance | `*.<region>.batch.azure.com` | ANY | 443 |
+| Compute instance | `*.<region>.service.batch.com` | ANY | 443 |
| Microsoft storage access | `*.blob.core.windows.net` | TCP | 443 | | Microsoft storage access | `*.table.core.windows.net` | TCP | 443 | | Microsoft storage access | `*.queue.core.windows.net` | TCP | 443 |
machine-learning How To Batch Scoring Script https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-batch-scoring-script.md
Any library that your scoring script requires to run needs to be indicated in th
__mnist/environment/conda.yml__ Refer to [Create a batch deployment](how-to-use-batch-endpoint.md#create-a-batch-deployment) for more details about how to indicate the environment for your model.
machine-learning How To Create Manage Compute Instance https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-create-manage-compute-instance.md
To create a compute instance, you'll need permissions for the following actions:
* *Microsoft.MachineLearningServices/workspaces/computes/write* * *Microsoft.MachineLearningServices/workspaces/checkComputeNameAvailability/action*
-### Audit and observe compute instance version
+## Audit and observe compute instance version
Once a compute instance is deployed, it does not get automatically updated. Microsoft [releases](azure-machine-learning-ci-image-release-notes.md) new VM images on a monthly basis. To understand options for keeping recent with the latest version, see [vulnerability management](concept-vulnerability-management.md#compute-instance).
-To keep track of whether an instance's operating system version is current, you could query its version using the Studio UI. In your workspace in Azure Machine Learning studio, select Compute, then select compute instance on the top. Select a compute instance's compute name to see its properties including the current operating system. Enable 'audit and observe compute instance os version' under the previews management panel to see these preview properties.
+To keep track of whether an instance's operating system version is current, you can query its version using the CLI, SDK, or Studio UI.
-Administrators can use [Azure Policy](policy-reference.md) definitions to audit instances that are running on outdated operating system versions across workspaces and subscriptions. The following is a sample policy:
+# [Studio UI](#tab/azure-studio)
-```json
-{
- "mode": "All",
- "policyRule": {
- "if": {
- "allOf": [
- {
- "field": "type",
- "equals": "Microsoft.MachineLearningServices/workspaces/computes"
- },
- {
- "field": "Microsoft.MachineLearningServices/workspaces/computes/computeType",
- "equals": "ComputeInstance"
- },
- {
- "field": "Microsoft.MachineLearningServices/workspaces/computes/osImageMetadata.isLatestOsImageVersion",
- "equals": "false"
- }
- ]
- },
- "then": {
- "effect": "Audit"
- }
- }
-}
+In your workspace in Azure Machine Learning studio, select Compute, then select Compute instance at the top. Select a compute instance's compute name to see its properties, including the current operating system.
+
+# [Python SDK](#tab/python)
++
+```python
+from azure.ai.ml.entities import ComputeInstance, AmlCompute
+
+# Display operating system version
+instance = ml_client.compute.get("myci")
+print(instance.os_image_metadata)
```
+For more information on the classes, methods, and parameters used in this example, see the following reference documents:
+
+* [`AmlCompute` class](/python/api/azure-ai-ml/azure.ai.ml.entities.amlcompute)
+* [`ComputeInstance` class](/python/api/azure-ai-ml/azure.ai.ml.entities.computeinstance)
+
+# [Azure CLI](#tab/azure-cli)
++
+```azurecli
+az ml compute show --name "myci"
+```
++
+IT administrators can use [Azure Policy](./../governance/policy/overview.md) to monitor the inventory of instances across workspaces in the Azure Policy compliance portal. Assign the built-in policy [Audit Azure Machine Learning Compute Instances with an outdated operating system](https://ms.portal.azure.com/#view/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Ff110a506-2dcb-422e-bcea-d533fc8c35e2) to an Azure subscription or Azure management group scope.
+ ## Next steps * [Access the compute instance terminal](how-to-access-terminal.md)
machine-learning How To Import Data Assets https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-import-data-assets.md
To create and work with data assets, you need:
* [Workspace connections created](how-to-connection.md)
-## Importing from external database sources / import from external sources to create a meltable data asset
+## Importing from external database sources / import from external sources to create an mltable data asset
> [!NOTE] > The external databases can have Snowflake, Azure SQL, etc. formats.
ml_client.data.show_materialization_status(name="<name>")
- [Read data in a job](how-to-read-write-data-v2.md#read-data-in-a-job) - [Working with tables in Azure Machine Learning](how-to-mltable.md)-- [Access data from Azure cloud storage during interactive development](how-to-access-data-interactive.md)
+- [Access data from Azure cloud storage during interactive development](how-to-access-data-interactive.md)
machine-learning How To Manage Registries https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-manage-registries.md
Azure Machine Learning registries (preview) enable you to create and use those a
[!INCLUDE [CLI v2 preres](../../includes/machine-learning-cli-prereqs.md)] + ## Prepare to create registry You need to decide the following information carefully before proceeding to create a registry:
You can create registries in Azure Machine Learning studio using the following s
> If you are in a workspace, navigate to the global UI by clicking your organization or tenant name in the navigation pane to find the __Registries__ entry. You can also go directly there by navigating to [https://ml.azure.com/registries](https://ml.azure.com/registries). :::image type="content" source="./media/how-to-manage-registries/studio-create-registry-button.png" lightbox="./media/how-to-manage-registries/studio-create-registry-button.png" alt-text="Screenshot of the create registry screen.":::
-
+
1. Enter the registry name, select the subscription and resource group and then select __Next__. :::image type="content" source="./media/how-to-manage-registries/studio-create-registry-basics.png" alt-text="Screenshot of the registry creation basics tab.":::
You can create registries in Azure Machine Learning studio using the following s
1. From the [Azure portal](https://portal.azure.com), navigate to the Azure Machine Learning service. You can get there by searching for __Azure Machine Learning__ in the search bar at the top of the page or going to __All Services__ looking for __Azure Machine Learning__ under the __AI + machine learning__ category. 1. Select __Create__, and then select __Azure Machine Learning registry__. Enter the registry name, select the subscription, resource group and primary region, then select __Next__.
-
+
1. Select the additional regions the registry must support, then select __Next__ until you arrive at the __Review + Create__ tab. :::image type="content" source="./media/how-to-manage-registries/create-registry-review.png" alt-text="Screenshot of the review + create tab.":::
machine-learning How To Nlp Processing Batch https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-nlp-processing-batch.md
Let's create the deployment that will host the model:
__environment/conda.yml__
- :::code language="yaml" source="~/azureml-examples-main/cli/endpoints/batch/deploy-models/huggingface-text-summarization/environment/torch200-conda.yml" :::
+ :::code language="yaml" source="~/azureml-examples-main/cli/endpoints/batch/deploy-models/huggingface-text-summarization/environment/torch200-conda.yaml" :::
1. We can use the conda file mentioned before as follows:
machine-learning How To Prevent Data Loss Exfiltration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-prevent-data-loss-exfiltration.md
Previously updated : 01/20/2023 Last updated : 04/14/2023 monikerRange: 'azureml-api-2 || azureml-api-1'
__Allow__ outbound traffic to the following __service tags__. Replace `<region>`
__Allow__ outbound traffic over __ANY port 443__ to the following FQDNs. Replace instances of `<region>` with the Azure region that contains your compute cluster or instance:
-* `<region>.batch.azure.com`
-* `<region>.service.batch.com`
+* `*.<region>.batch.azure.com`
+* `*.<region>.service.batch.com`
> [!WARNING] > If you enable the service endpoint on the subnet used by your firewall, you must open outbound traffic to the following hosts over __TCP port 443__:
machine-learning How To Secure Training Vnet https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-secure-training-vnet.md
Previously updated : 01/09/2023 Last updated : 04/14/2023 ms.devlang: azurecli
The following configurations are in addition to those listed in the [Prerequisit
| `<region>.tundra.azureml.ms` | UDP | 5831 | Replace `<region>` with the Azure region that contains your Azure Machine Learning workspace. | | `graph.windows.net` | TCP | 443 | Communication with the Microsoft Graph API.| | `*.instances.azureml.ms` | TCP | 443/8787/18881 | Communication with Azure Machine Learning. |
- | `<region>.batch.azure.com` | ANY | 443 | Replace `<region>` with the Azure region that contains your Azure Machine Learning workspace. Communication with Azure Batch. |
- | `<region>.service.batch.com` | ANY | 443 | Replace `<region>` with the Azure region that contains your Azure Machine Learning workspace. Communication with Azure Batch. |
+ | `*.<region>.batch.azure.com` | ANY | 443 | Replace `<region>` with the Azure region that contains your Azure Machine Learning workspace. Communication with Azure Batch. |
+ | `*.<region>.service.batch.com` | ANY | 443 | Replace `<region>` with the Azure region that contains your Azure Machine Learning workspace. Communication with Azure Batch. |
| `*.blob.core.windows.net` | TCP | 443 | Communication with Azure Blob storage. | | `*.queue.core.windows.net` | TCP | 443 | Communication with Azure Queue storage. | | `*.table.core.windows.net` | TCP | 443 | Communication with Azure Table storage. |
The following configurations are in addition to those listed in the [Prerequisit
| `<region>.tundra.azureml.ms` | UDP | 5831 | Replace `<region>` with the Azure region that contains your Azure Machine Learning workspace. | | `graph.windows.net` | TCP | 443 | Communication with the Microsoft Graph API.| | `*.instances.azureml.ms` | TCP | 443/8787/18881 | Communication with Azure Machine Learning. |
- | `<region>.batch.azure.com` | ANY | 443 | Replace `<region>` with the Azure region that contains your Azure Machine Learning workspace. Communication with Azure Batch. |
- | `<region>.service.batch.com` | ANY | 443 | Replace `<region>` with the Azure region that contains your Azure Machine Learning workspace. Communication with Azure Batch. |
+ | `*.<region>.batch.azure.com` | ANY | 443 | Replace `<region>` with the Azure region that contains your Azure Machine Learning workspace. Communication with Azure Batch. |
+ | `*.<region>.service.batch.com` | ANY | 443 | Replace `<region>` with the Azure region that contains your Azure Machine Learning workspace. Communication with Azure Batch. |
| `*.blob.core.windows.net` | TCP | 443 | Communication with Azure Blob storage. | | `*.queue.core.windows.net` | TCP | 443 | Communication with Azure Queue storage. | | `*.table.core.windows.net` | TCP | 443 | Communication with Azure Table storage. |
machine-learning How To Share Data Across Workspaces With Registries https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-share-data-across-workspaces-with-registries.md
Before following the steps in this article, make sure you have the following pre
To install the Python SDK v2, use the following command: ```bash
- pip install --pre azure-ai-ml
+ pip install --pre --upgrade azure-ai-ml azure-identity
```
machine-learning How To Share Models Pipelines Across Workspaces With Registries https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-share-models-pipelines-across-workspaces-with-registries.md
Before following the steps in this article, make sure you have the following pre
To install the Python SDK v2, use the following command: ```bash
- pip install --pre azure-ai-ml
+ pip install --pre --upgrade azure-ai-ml azure-identity
```
machine-learning How To Use Batch Endpoint https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-use-batch-endpoint.md
A deployment is a set of resources required for hosting the model that does the
__deployment-torch/environment/conda.yml__
- :::code language="yaml" source="~/azureml-examples-main/cli/endpoints/batch/deploy-models/mnist-classifier/deployment-torch/environment/conda.yml":::
+ :::code language="yaml" source="~/azureml-examples-main/cli/endpoints/batch/deploy-models/mnist-classifier/deployment-torch/environment/conda.yaml":::
> [!IMPORTANT] > The packages `azureml-core` and `azureml-dataset-runtime[fuse]` are required by batch deployments and should be included in the environment dependencies.
In this example, you'll learn how to add a second deployment __that solves the s
__deployment-keras/environment/conda.yml__
- :::code language="yaml" source="~/azureml-examples-main/cli/endpoints/batch/deploy-models/mnist-classifier/deployment-keras/environment/conda.yml":::
+ :::code language="yaml" source="~/azureml-examples-main/cli/endpoints/batch/deploy-models/mnist-classifier/deployment-keras/environment/conda.yaml":::
1. Create a scoring script for the model:
machine-learning How To Use Pipeline Component https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-use-pipeline-component.md
+
+ Title: How to use pipeline component in pipeline
+
+description: How to use pipeline component to build nested pipeline job in Azure Machine Learning pipeline using CLI v2 and Python SDK
+++++++ Last updated : 04/12/2023+++
+# How to use pipeline component to build nested pipeline job (V2) (preview)
++
+When developing a complex machine learning pipeline, it's common to have sub-pipelines that use multiple steps to perform tasks such as data preprocessing and model training. These sub-pipelines can be developed and tested standalone. A pipeline component groups these multiple steps into a component that can be used as a single step to create complex pipelines, which helps you share your work and collaborate better with team members.
+
+By using a pipeline component, the author can focus on developing sub-tasks and easily integrate them with the entire pipeline job. Furthermore, a pipeline component has a well-defined interface in terms of inputs and outputs, which means that the user of a pipeline component doesn't need to know its implementation details.
+
+In this article, you'll learn how to use pipeline components in an Azure Machine Learning pipeline.
++
+## Prerequisites
+
+- Understand how to use Azure Machine Learning pipelines with the [CLI v2](how-to-create-component-pipelines-cli.md) and [SDK v2](how-to-create-component-pipeline-python.md).
+- Understand what a [component](concept-component.md) is and how to use components in an Azure Machine Learning pipeline.
+- Understand what an [Azure Machine Learning pipeline](concept-ml-pipelines.md) is.
+
+## The difference between pipeline job and pipeline component
+
+In general, a pipeline component is similar to a pipeline job: both consist of a group of jobs/components.
+
+Here are the main differences you need to be aware of when defining a pipeline component:
+
+- A pipeline component only defines the interface of inputs/outputs, which means that when defining a pipeline component you need to explicitly define the type of inputs/outputs instead of directly assigning values to them.
+- A pipeline component can't have runtime settings; you can't hard-code a compute or data node in the pipeline component. Instead, you need to promote them as pipeline-level inputs and assign values at runtime.
+- Pipeline-level settings such as default_datastore and default_compute are also runtime settings. They aren't part of the pipeline component definition.
+
+### CLI v2
+
+The example used in this article can be found in the [azureml-examples repo](https://github.com/Azure/azureml-examples). Navigate to *azureml-examples/cli/jobs/pipelines-with-components/pipeline_with_pipeline_component* to see the example.
+
+You can use multiple components to build a pipeline component, similar to how you build a pipeline job with components. The following is a two-step pipeline component.
++
+You reference a pipeline component to define a child job in a pipeline job just like you reference any other type of component. You can provide runtime settings such as default_datastore and default_compute at the pipeline job level; any parameter you want to change at runtime needs to be promoted as a pipeline job input, otherwise it's hard-coded in the pipeline component. Promoting compute as a pipeline component input supports heterogeneous pipelines, which may need different compute targets in different steps.
++
+### Python SDK
+
+The Python SDK example can be found in the [azureml-examples repo](https://github.com/Azure/azureml-examples). Navigate to *azureml-examples/sdk/python/jobs/pipelines/1j_pipeline_with_pipeline_component/pipeline_with_train_eval_pipeline_component* to see the example.
+
+You can define a pipeline component using a Python function, which is similar to defining a pipeline job using a function. You can also promote the compute of some steps to be used as inputs of the pipeline component.
+
+[!notebook-python[] (~/azureml-examples-main/sdk/python/jobs/pipelines/1j_pipeline_with_pipeline_component/pipeline_with_train_eval_pipeline_component/pipeline_with_train_eval_pipeline_component.ipynb?name=pipeline-component)]
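+
+As an illustrative sketch (not the sample from the notebook above), the following shows the general shape of a pipeline component defined with the `@pipeline` decorator. The component YAML paths, parameter names, and output names are hypothetical:
+
+```python
+from azure.ai.ml import load_component
+from azure.ai.ml.dsl import pipeline
+
+# Load the step components that make up the sub-pipeline (paths are illustrative).
+train_model = load_component(source="./train/train.yml")
+score_data = load_component(source="./score/score.yml")
+
+@pipeline()
+def train_pipeline_component(training_input, learning_rate: float = 0.01):
+    """Two-step pipeline component: train a model, then score it."""
+    train_step = train_model(training_data=training_input, learning_rate=learning_rate)
+    score_step = score_data(model_input=train_step.outputs.model_output, test_data=training_input)
+    # Only the interface (inputs/outputs) is defined here; compute and datastore
+    # are runtime settings assigned later, at the pipeline job level.
+    return {"scored_output": score_step.outputs.score_output}
+```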
+
+You can use a pipeline component as a step in a pipeline job, just like any other component.
+
+[!notebook-python[] (~/azureml-examples-main/sdk/python/jobs/pipelines/1j_pipeline_with_pipeline_component/pipeline_with_train_eval_pipeline_component/pipeline_with_train_eval_pipeline_component.ipynb?name=pipeline-component-pipeline-job)]
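+
+Continuing the illustrative sketch above (assuming the hypothetical `train_pipeline_component` function and a compute cluster named `cpu-cluster` exist), a parent pipeline job might use it like this:
+
+```python
+from azure.ai.ml import Input
+from azure.ai.ml.dsl import pipeline
+
+@pipeline(default_compute="cpu-cluster")  # runtime settings live at the pipeline job level
+def full_pipeline(raw_data):
+    # The pipeline component is used exactly like any other component step.
+    train_and_score = train_pipeline_component(training_input=raw_data, learning_rate=0.05)
+    return {"final_scores": train_and_score.outputs.scored_output}
+
+pipeline_job = full_pipeline(
+    raw_data=Input(type="uri_folder", path="azureml://datastores/workspaceblobstore/paths/data/")
+)
+# Submit with an authenticated MLClient, for example:
+# ml_client.jobs.create_or_update(pipeline_job, experiment_name="nested-pipeline-demo")
+```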
+
+## Pipeline job with pipeline component in studio
+
+You can use `az ml component create` or `ml_client.components.create_or_update` to register a pipeline component as a registered component. After that, you can view the component in the asset library and on the component list page.
+
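+As a minimal sketch of registration from the SDK (assuming `ml_client` is an authenticated MLClient and `train_pipeline_component` is the hypothetical pipeline function from the earlier sketch; the exact argument accepted may vary by SDK version):
+
+```python
+# Register the pipeline component so it shows up in the workspace asset library.
+registered = ml_client.components.create_or_update(train_pipeline_component)
+print(registered.name, registered.version)
+```
+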
+### Using pipeline component to build pipeline job
+
+After you register the pipeline component, you can drag and drop it onto the designer canvas and use the UI to build a pipeline job.
++
+### View pipeline job using pipeline component
+
+After you submit a pipeline job, you can go to the pipeline job detail page to check the pipeline component's status. You can also drill down into a child component of the pipeline component to debug a specific component.
++
+## Sample notebooks
+
+- [nyc_taxi_data_regression_with_pipeline_component](https://github.com/Azure/azureml-examples/blob/main/sdk/python/jobs/pipelines/1j_pipeline_with_pipeline_component/nyc_taxi_data_regression_with_pipeline_component/nyc_taxi_data_regression_with_pipeline_component.ipynb)
+- [pipeline_with_train_eval_pipeline_component](https://github.com/Azure/azureml-examples/blob/main/sdk/python/jobs/pipelines/1j_pipeline_with_pipeline_component/pipeline_with_train_eval_pipeline_component/pipeline_with_train_eval_pipeline_component.ipynb)
+
+## Next steps
+- [YAML reference for pipeline component](reference-yaml-component-pipeline.md)
+- [Track an experiment](how-to-log-view-metrics.md)
+- [Deploy a trained model](how-to-deploy-managed-online-endpoints.md)
machine-learning Reference Yaml Component Pipeline https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/reference-yaml-component-pipeline.md
+
+ Title: 'CLI (v2) pipeline component YAML schema'
+
+description: Reference documentation for the CLI (v2) pipeline component YAML schema.
+++++++ Last updated : 04/12/2023+++
+# CLI (v2) pipeline component YAML schema (preview)
++
+The source JSON schema can be found at https://azuremlschemas.azureedge.net/latest/pipelineComponent.schema.json.
+++
+## YAML syntax
+
+| Key | Type | Description | Allowed values | Default value |
+| --- | --- | --- | --- | --- |
+| `$schema` | string | The YAML schema. If you use the Azure Machine Learning VS Code extension to author the YAML file, including `$schema` at the top of your file enables you to invoke schema and resource completions. | | |
+| `type` | const | The type of component. | `pipeline` | `pipeline` |
+| `name` | string | **Required.** Name of the component. Must start with lowercase letter. Allowed characters are lowercase letters, numbers, and underscore(_). Maximum length is 255 characters.| | |
+| `version` | string | Version of the component. If omitted, Azure Machine Learning will autogenerate a version. | | |
+| `display_name` | string | Display name of the component in the studio UI. It can be non-unique within the workspace. | | |
+| `description` | string | Description of the component. | | |
+| `tags` | object | Dictionary of tags for the component. | | |
+| `jobs` | object | **Required.** Dictionary of the set of individual jobs to run as steps within the pipeline. These jobs are considered child jobs of the parent pipeline job. <br><br> The key is the name of the step within the context of the pipeline job. This name is different from the unique job name of the child job. The value is the job specification, which can follow the [command job schema](reference-yaml-job-command.md#yaml-syntax) or [sweep job schema](reference-yaml-job-sweep.md#yaml-syntax). Currently only command jobs and sweep jobs can be run in a pipeline. | | |
+| `inputs` | object | Dictionary of inputs to the pipeline job. The key is a name for the input within the context of the job and the value is the input value. <br><br> These pipeline inputs can be referenced by the inputs of an individual step job in the pipeline using the `${{ parent.inputs.<input_name> }}` expression. For more information on how to bind the inputs of a pipeline step to the inputs of the top-level pipeline job, see the [Expression syntax for binding inputs and outputs between steps in a pipeline job](reference-yaml-core-syntax.md#binding-inputs-and-outputs-between-steps-in-a-pipeline-job). | | |
+| `inputs.<input_name>` | number, integer, boolean, string or object | One of a literal value (of type number, integer, boolean, or string) or an object containing a [component input data specification](#component-input). | | |
+| `outputs` | object | Dictionary of output configurations of the pipeline job. The key is a name for the output within the context of the job and the value is the output configuration. <br><br> These pipeline outputs can be referenced by the outputs of an individual step job in the pipeline using the `${{ parent.outputs.<output_name> }}` expression. For more information on how to bind the outputs of a pipeline step to the outputs of the top-level pipeline job, see the [Expression syntax for binding inputs and outputs between steps in a pipeline job](reference-yaml-core-syntax.md#binding-inputs-and-outputs-between-steps-in-a-pipeline-job). | |
+| `outputs.<output_name>` | object | You can leave the object empty, in which case by default the output will be of type `uri_folder` and Azure Machine Learning will system-generate an output location for the output based on the following template path: `{settings.datastore}/azureml/{job-name}/{output-name}/`. File(s) to the output directory will be written via read-write mount. If you want to specify a different mode for the output, provide an object containing the [component output specification](#component-output). | |
+
+### Component input
+
+| Key | Type | Description | Allowed values | Default value |
+| --- | --- | --- | --- | --- |
+| `type` | string | **Required.** The type of component input. [Learn more about data access](concept-data.md) | `number`, `integer`, `boolean`, `string`, `uri_file`, `uri_folder`, `mltable`, `mlflow_model`, `custom_model`| |
+| `description` | string | Description of the input. | | |
+| `default` | number, integer, boolean, or string | The default value for the input. | | |
+| `optional` | boolean | Whether the input is optional. If set to `true`, the command needs to include the optional input with `$[[]]`. | | `false` |
+| `min` | integer or number | The minimum accepted value for the input. This field can only be specified if `type` field is `number` or `integer`. | |
+| `max` | integer or number | The maximum accepted value for the input. This field can only be specified if `type` field is `number` or `integer`. | |
+| `enum` | array | The list of allowed values for the input. Only applicable if `type` field is `string`.| |
+
+### Component output
+
+| Key | Type | Description | Allowed values | Default value |
+| --- | --- | --- | --- | --- |
+| `type` | string | **Required.** The type of component output. | `uri_file`, `uri_folder`, `mltable`, `mlflow_model`, `custom_model` | |
+| `description` | string | Description of the output. | | |
+
+## Remarks
+
+The `az ml component` commands can be used for managing Azure Machine Learning components.
+
+## Examples
+
+Examples are available in the [examples GitHub repository](https://github.com/Azure/azureml-examples/tree/lochen/pipeline-component-pup/cli/jobs/pipelines-with-components/pipeline_with_pipeline_component).
+
+## Next steps
+
+- [Install and use the CLI (v2)](how-to-configure-cli.md)
+- [Create ML pipelines using components](how-to-create-component-pipelines-cli.md)
machine-learning How To Secure Training Vnet https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/v1/how-to-secure-training-vnet.md
The following configurations are in addition to those listed in the [Prerequisit
| `<region>.tundra.azureml.ms` | UDP | 5831 | Replace `<region>` with the Azure region that contains your Azure Machine Learning workspace. | | `graph.windows.net` | TCP | 443 | Communication with the Microsoft Graph API.| | `*.instances.azureml.ms` | TCP | 443/8787/18881 | Communication with Azure Machine Learning. |
- | `<region>.batch.azure.com` | ANY | 443 | Replace `<region>` with the Azure region that contains your Azure Machine Learning workspace. Communication with Azure Batch. |
- | `<region>.service.batch.com` | ANY | 443 | Replace `<region>` with the Azure region that contains your Azure Machine Learning workspace. Communication with Azure Batch. |
+ | `*.<region>.batch.azure.com` | ANY | 443 | Replace `<region>` with the Azure region that contains your Azure Machine Learning workspace. Communication with Azure Batch. |
+ | `*.<region>.service.batch.com` | ANY | 443 | Replace `<region>` with the Azure region that contains your Azure Machine Learning workspace. Communication with Azure Batch. |
| `*.blob.core.windows.net` | TCP | 443 | Communication with Azure Blob storage. | | `*.queue.core.windows.net` | TCP | 443 | Communication with Azure Queue storage. | | `*.table.core.windows.net` | TCP | 443 | Communication with Azure Table storage. |
The following configurations are in addition to those listed in the [Prerequisit
| `<region>.tundra.azureml.ms` | UDP | 5831 | Replace `<region>` with the Azure region that contains your Azure Machine Learning workspace. | | `graph.windows.net` | TCP | 443 | Communication with the Microsoft Graph API.| | `*.instances.azureml.ms` | TCP | 443/8787/18881 | Communication with Azure Machine Learning. |
- | `<region>.batch.azure.com` | ANY | 443 | Replace `<region>` with the Azure region that contains your Azure Machine Learning workspace. Communication with Azure Batch. |
- | `<region>.service.batch.com` | ANY | 443 | Replace `<region>` with the Azure region that contains your Azure Machine Learning workspace. Communication with Azure Batch. |
+ | `*.<region>.batch.azure.com` | ANY | 443 | Replace `<region>` with the Azure region that contains your Azure Machine Learning workspace. Communication with Azure Batch. |
+ | `*.<region>.service.batch.com` | ANY | 443 | Replace `<region>` with the Azure region that contains your Azure Machine Learning workspace. Communication with Azure Batch. |
| `*.blob.core.windows.net` | TCP | 443 | Communication with Azure Blob storage. | | `*.queue.core.windows.net` | TCP | 443 | Communication with Azure Queue storage. | | `*.table.core.windows.net` | TCP | 443 | Communication with Azure Table storage. |
managed-grafana Concept Whats New https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/managed-grafana/concept-whats-new.md
Title: What's new in Azure Managed Grafana description: Recent updates for Azure Managed Grafana--++ Last updated 02/06/2023
managed-grafana Encryption https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/managed-grafana/encryption.md
Title: Encryption in Azure Managed Grafana description: Learn how data is encrypted in Azure Managed Grafana.--++ Last updated 03/23/2023
managed-grafana Find Help Open Support Ticket https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/managed-grafana/find-help-open-support-ticket.md
Title: Find help or open a support ticket for Azure Managed Grafana description: Learn how to find help or open a support ticket for Azure Managed Grafana-+ Last updated 01/23/2023-+ # Find help or open a support ticket for Azure Managed Grafana
managed-grafana Grafana App Ui https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/managed-grafana/grafana-app-ui.md
Title: Grafana UI description: Learn about the Grafana UI components--panels, visualizations and dashboards.--++ Last updated 3/23/2022
managed-grafana High Availability https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/managed-grafana/high-availability.md
Title: Azure Managed Grafana service reliability description: Learn about service reliability and availability options provided by Azure Managed Grafana--++ Last updated 3/23/2023
managed-grafana How To Api Calls https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/managed-grafana/how-to-api-calls.md
Title: 'Call Grafana APIs programmatically with Azure Managed Grafana' description: Learn how to call Grafana APIs programmatically with Azure Active Directory and an Azure service principal--++ Last updated 04/05/2023
managed-grafana How To Authentication Permissions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/managed-grafana/how-to-authentication-permissions.md
Title: How to set up authentication and permissions in Azure Managed Grafana
description: Learn how to set up Azure Managed Grafana authentication permissions using a system-assigned Managed identity or a Service Principal --++ Last updated 12/13/2022
managed-grafana How To Create Api Keys https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/managed-grafana/how-to-create-api-keys.md
Title: Create and manage Grafana API keys in Azure Managed Grafana description: Learn how to generate and manage Grafana API keys, and start making API calls for Azure Managed Grafana.--++
managed-grafana How To Create Dashboard https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/managed-grafana/how-to-create-dashboard.md
Title: Create a Grafana dashboard with Azure Managed Grafana description: Learn how to create and configure Azure Managed Grafana dashboards.--++ Last updated 03/07/2023
managed-grafana How To Data Source Plugins Managed Identity https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/managed-grafana/how-to-data-source-plugins-managed-identity.md
Title: How to configure data sources for Azure Managed Grafana description: In this how-to guide, discover how you can configure data sources for Azure Managed Grafana using Managed Identity.--++ Last updated 1/12/2023
managed-grafana How To Deterministic Ip https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/managed-grafana/how-to-deterministic-ip.md
Title: How to set up and use deterministic outbound APIs in Azure Managed Grafan
description: Learn how to set up and use deterministic outbound APIs in Azure Managed Grafana --++ Last updated 03/23/2022
managed-grafana How To Enable Zone Redundancy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/managed-grafana/how-to-enable-zone-redundancy.md
Title: How to enable zone redundancy in Azure Managed Grafana
description: Learn how to create a zone-redundant Managed Grafana instance. --++ Last updated 02/28/2023
managed-grafana How To Grafana Enterprise https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/managed-grafana/how-to-grafana-enterprise.md
Title: Subscribe to Grafana Enterprise description: Activate Grafana Enterprise (preview) to access Grafana Enterprise plugins within Azure Managed Grafana--++ Last updated 01/09/2023
managed-grafana How To Monitor Managed Grafana Workspace https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/managed-grafana/how-to-monitor-managed-grafana-workspace.md
Title: 'How to monitor your Azure Managed Grafana instance with logs' description: Learn how to monitor your Azure Managed Grafana instance with logs.--++
managed-grafana How To Permissions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/managed-grafana/how-to-permissions.md
Title: How to modify access permissions to Azure Monitor description: Learn how to manually set up permissions that allow your Azure Managed Grafana instance to access a data source--++
managed-grafana How To Service Accounts https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/managed-grafana/how-to-service-accounts.md
Title: How to use service accounts in Azure Managed Grafana description: In this guide, learn how to use service accounts in Azure Managed Grafana.--++
managed-grafana How To Set Up Private Access https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/managed-grafana/how-to-set-up-private-access.md
Title: How to set up private access (preview) in Azure Managed Grafana description: How to disable public access to your Azure Managed Grafana instance and configure private endpoints.--++ Last updated 02/16/2023
managed-grafana How To Share Dashboard https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/managed-grafana/how-to-share-dashboard.md
Title: Share an Azure Managed Grafana dashboard or panel description: Learn how to share a Grafana dashboard with internal and external stakeholders, such as customers or partners.--++ Last updated 03/01/2023
managed-grafana How To Share Grafana Workspace https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/managed-grafana/how-to-share-grafana-workspace.md
Title: How to share an Azure Managed Grafana instance description: 'Learn how you can share access permissions to Azure Grafana Managed.' --++
managed-grafana How To Smtp Settings https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/managed-grafana/how-to-smtp-settings.md
Title: 'How to configure SMTP settings (preview) within Azure Managed Grafana' description: Learn how to configure SMTP settings (preview) to generate email notifications for Azure Managed Grafana--++ Last updated 02/01/2023
managed-grafana Known Limitations https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/managed-grafana/known-limitations.md
description: Learn about current limitations in Azure Managed Grafana.
Last updated 03/13/2023-+ -+ # Limitations of Azure Managed Grafana
managed-grafana Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/managed-grafana/overview.md
Title: What is Azure Managed Grafana? description: Read an overview of Azure Managed Grafana. Understand why and how to use Managed Grafana. --++ Last updated 3/23/2023
managed-grafana Quickstart Managed Grafana Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/managed-grafana/quickstart-managed-grafana-cli.md
Title: 'Quickstart: create an Azure Managed Grafana instance using the Azure CLI
description: Learn how to create a Managed Grafana instance using the Azure CLI --++ Last updated 12/13/2022 ms.devlang: azurecli
managed-grafana Quickstart Managed Grafana Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/managed-grafana/quickstart-managed-grafana-portal.md
Title: 'Quickstart: create an Azure Managed Grafana instance using the Azure por
description: Learn how to create a Managed Grafana workspace to generate a new Managed Grafana instance in the Azure portal --++ Last updated 03/23/2022
managed-grafana Troubleshoot Managed Grafana https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/managed-grafana/troubleshoot-managed-grafana.md
Title: 'Troubleshoot Azure Managed Grafana' description: Troubleshoot Azure Managed Grafana issues related to fetching data, managing Managed Grafana dashboards, speed and more.--++ Last updated 09/13/2022
managed-instance-apache-cassandra Best Practice Performance https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/managed-instance-apache-cassandra/best-practice-performance.md
+
+ Title: Best practices for optimal performance in Azure Managed Instance for Apache Cassandra
+description: Learn about best practices to ensure optimal performance from Azure Managed Instance for Apache Cassandra
+++ Last updated : 04/05/2023+
+keywords: azure performance cassandra
++
+# Best practices for optimal performance
+
+Azure Managed Instance for Apache Cassandra provides automated deployment and scaling operations for managed open-source Apache Cassandra datacenters. This article provides tips on how to optimize performance.
++
+## Optimal setup and configuration
+
+### Replication factor, number of disks, number of nodes, and SKUs
+
+Because Azure supports *three* availability zones in most regions, and Cassandra Managed Instance maps availability zones to racks, we recommend choosing a partition key with high cardinality to avoid hot partitions. For the best level of reliability and fault tolerance, we highly recommend configuring a replication factor of 3. We also recommend specifying a multiple of the replication factor as the number of nodes, for example 3, 6, 9, etc.
++
+We use RAID 0 over the number of disks you provision. So, to get the optimal IOPS, you need to check the maximum IOPS of the SKU you've chosen together with the IOPS of a P30 disk. For example, the `Standard_DS14_v2` SKU supports 51,200 uncached IOPS, whereas a single P30 disk has a base performance of 5,000 IOPS. So, four disks would lead to 20,000 IOPS, which is well below the limits of the machine.
+
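+The sizing rule described above amounts to a quick back-of-the-envelope calculation. A minimal sketch, assuming P30 disks and the `Standard_DS14_v2` limits quoted above:
+
+```python
+# Effective IOPS of a RAID 0 stripe, capped by the VM SKU's uncached IOPS limit.
+P30_BASE_IOPS = 5_000        # base IOPS of one P30 managed disk
+VM_UNCACHED_IOPS = 51_200    # e.g. Standard_DS14_v2
+
+def effective_iops(disk_count: int) -> int:
+    return min(disk_count * P30_BASE_IOPS, VM_UNCACHED_IOPS)
+
+print(effective_iops(4))    # 20000 - well below the VM limit
+print(effective_iops(12))   # 51200 - capped by the VM SKU
+```
+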
+We strongly recommend extensive benchmarking of your workload against the SKU and number of disks. Benchmarking is especially important for SKUs with only eight cores. Our research shows that eight-core CPUs only work for the least demanding workloads, and most workloads need a minimum of 16 cores to be performant.
++
+## Analytical vs. Transactional workloads
+
+Transactional workloads typically need a data center optimized for low latency, while analytical workloads often use more complex queries, which take longer to execute. In most cases you would want separate data centers:
+
+* One optimized for low latency
+* One optimized for analytical workloads
++
+### Optimizing for analytical workloads
+
+We recommend customers apply the following `cassandra.yaml` settings for analytical workloads (see [here](create-cluster-portal.md#update-cassandra-configuration) for how to apply them):
+++
+#### Timeouts
++
+|Value                             |Cassandra MI Default    |Recommendation for analytical workload |
+|---|---|---|
+|read_request_timeout_in_ms        |    5,000               |   10,000  |
+|range_request_timeout_in_ms       |10,000                  |20,000 |
+|counter_write_request_timeout_in_ms |  5,000               | 10,000 |
+|cas_contention_timeout_in_ms      |1,000                   |2,000 |
+|truncate_request_timeout_in_ms    |60,000                  |120,000|
+|slow_query_log_timeout_in_ms      |500                     |1,000 |
+|roles_validity_in_ms              |2,000                   |120,000 |
+|permissions_validity_in_ms        |2,000                   |120,000 |
++
+#### Caches
+
+|Value                             |Cassandra MI Default    |Recommendation for analytical workload |
+|---|---|---|
+| file_cache_size_in_mb            | 2,048                  |6,144 |
++
+#### More recommendations
+
+|Value                             |Cassandra MI Default    |Recommendation for analytical workload |
+|---|---|---|
+|commitlog_total_space_in_mb       |8,192                   |16,384 |
+|column_index_size_in_kb           |64                      |16 |
+|compaction_throughput_mb_per_sec  |128                     |256 |
++
+#### Client settings
+
+We recommend boosting Cassandra client driver timeouts in accordance with the timeouts applied on the server.
++
+### Optimizing for low latency
+
+Our default settings are already suitable for low-latency workloads. To ensure the best performance for tail latencies, we highly recommend using a client driver that supports [speculative execution](https://docs.datastax.com/en/developer/java-driver/4.10/manual/core/speculative_execution/) and configuring your client accordingly. For the Java V4 driver, you can find a demo illustrating how this works and how to enable the policy [here](https://github.com/Azure-Samples/azure-cassandra-mi-java-v4-speculative-execution).
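+
+For illustration only, a similar policy can be enabled in the DataStax Python driver (this isn't the linked Java demo; the contact point, keyspace, and table names are hypothetical):
+
+```python
+from cassandra.cluster import Cluster, ExecutionProfile, EXEC_PROFILE_DEFAULT
+from cassandra.policies import ConstantSpeculativeExecutionPolicy
+from cassandra.query import SimpleStatement
+
+# Send a speculative retry if a node hasn't answered within 100 ms, up to 2 extra attempts.
+profile = ExecutionProfile(
+    speculative_execution_policy=ConstantSpeculativeExecutionPolicy(delay=0.1, max_attempts=2)
+)
+cluster = Cluster(["10.0.0.4"], execution_profiles={EXEC_PROFILE_DEFAULT: profile})
+session = cluster.connect("my_keyspace")
+
+# Speculative execution only applies to statements marked as idempotent.
+stmt = SimpleStatement("SELECT * FROM my_table WHERE pk = %s", is_idempotent=True)
+rows = session.execute(stmt, ["some-partition"])
+```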
++++
+## Monitoring for performance bottlenecks
++
+### CPU performance
+
+Like every database system, Cassandra works best if the CPU utilization is around 50% and never gets above 80%. You can view CPU metrics in the Metrics tab within Monitoring from the portal:
+
+ :::image type="content" source="./media/best-practice-performance/metrics.png" alt-text="Screenshot of CPU metrics." lightbox="./media/best-practice-performance/metrics.png" border="true":::
++
+If the CPU is permanently above 80% for most nodes, the database becomes overloaded, manifesting in multiple client timeouts. In this scenario, we recommend taking the following actions:
+
+* Vertically scale up to a SKU with more CPU cores (especially if you only have 8 cores or fewer).
+* Horizontally scale by adding more nodes (as mentioned earlier, the number of nodes should be a multiple of the replication factor).
++
+If the CPU is only high for a few nodes, but low for the others, it indicates a hot partition and needs further investigation.
++
+> [!NOTE]
+> Currently, changing the SKU is only supported via ARM template deployment. You can deploy/edit the ARM template and replace the SKU with one of the following.
+>
+> - Standard_E8s_v4
+> - Standard_E16s_v4
+> - Standard_E20s_v4
+> - Standard_E32s_v4
+> - Standard_DS13_v2
+> - Standard_DS14_v2
+> - Standard_D8s_v4
+> - Standard_D16s_v4
+> - Standard_D32s_v4
+++
+### Disk performance
+
+The service runs on Azure P30 managed disks, which allow for "burst IOPS". Careful monitoring is required when it comes to disk-related performance bottlenecks. In this case, it's important to review the IOPS metrics:
+
+ :::image type="content" source="./media/best-practice-performance/metrics-disk.png" alt-text="Screenshot of disk I/O metrics." lightbox="./media/best-practice-performance/metrics-disk.png" border="true":::
+
+If metrics show one or all of the following characteristics, it can indicate that you need to scale up.
+
+- Consistently higher than or equal to the base IOPS (remember to multiply 5,000 IOPS by the number of disks per node to get the number).
+- Consistently higher than or equal to the maximum IOPS allowed for the SKU for writes.
+- Your SKU supports cached storage (write-through-cache) and this number is smaller than the IOPS from the managed disks (this will be the upper limit for your read IOPS).
+
+If you only see the IOPS elevated for a few nodes, you might have a hot partition and need to review your data for a potential skew.
+
+If your IOPS are lower than what is supported by the chosen SKU, but higher than or equal to the disk IOPS, you can take the following actions:
+
+* Add more disks to increase performance. Increasing disks requires a support case to be raised.
+* [Scale up the data center(s)](create-cluster-portal.md#scale-a-datacenter) by adding more nodes.
++
+If your IOPS max out what your SKU supports, you can:
+
+* Scale up to a different SKU supporting more IOPS.
+* [Scale up the data center(s)](create-cluster-portal.md#scale-a-datacenter) by adding more nodes.
++
+For more information, refer to [Virtual Machine and disk performance](../virtual-machines/disks-performance.md).
+
+### Network performance
+
+In most cases, network performance is sufficient. However, if you frequently stream data (for example, frequent horizontal scale-up/scale-down) or there are large ingress/egress data movements, this can become a problem. You may need to evaluate the network performance of your SKU. For example, the `Standard_DS14_v2` SKU supports 12,000 Mb/s; compare this to the byte-in/out in the metrics:
++
+ :::image type="content" source="./media/best-practice-performance/metrics-network.png" alt-text="Screenshot of network metrics." lightbox="./media/best-practice-performance/metrics-network.png" border="true":::
++
+If you only see the network elevated for a small number of nodes, you might have a hot partition and need to review your data distribution and/or access patterns for a potential skew. If network throughput is elevated across the cluster, you can:
+
+* Vertically scale up to a different SKU supporting more network I/O.
+* Horizontally scale up the cluster by adding more nodes.
+++
+### Too many connected clients
+
+Deployments should be planned and provisioned to support the maximum number of parallel requests required for the desired latency of an application. For a given deployment, introducing more load to the system above a minimum threshold increases overall latency. Monitor the number of connected clients to ensure this does not exceed tolerable limits.
+
+ :::image type="content" source="./media/best-practice-performance/metrics-connections.png" alt-text="Screenshot of connected client metrics." lightbox="./media/best-practice-performance/metrics-connections.png" border="true":::
++
+### Disk space
+
+In most cases, there is sufficient disk space because default deployments are optimized for IOPS, which leads to low utilization of the disk. Nevertheless, we advise occasionally reviewing disk space metrics. Cassandra accumulates a lot of disk usage and then reduces it when compaction is triggered. Hence, it's important to review disk usage over longer periods to establish trends, such as compaction being unable to recoup space.
+
+> [!NOTE]
+> In order to ensure available space for compaction, disk utilization should be kept to around 50%.
+
+If you only see this behavior for a few nodes, you might have a hot partition and need to review your data distribution and/or access patterns for a potential skew. If disk usage is growing across the cluster, you can:
++
+* Add more disks, but be mindful of IOPS limits imposed by your SKU.
+* Horizontally scale up the cluster.
+++
+### JVM memory
+
+Our default formula assigns half the VM's memory to the JVM, with an upper limit of 31 GB, which in most cases is a good balance between performance and memory. Some workloads, especially those with frequent cross-partition reads or range scans, might be memory challenged.
+
+In most cases, memory gets reclaimed effectively by the Java garbage collector, but if the CPU is often above 80%, there aren't enough CPU cycles left for the garbage collector. So any CPU performance problems should be addressed before memory problems.
+
+If the CPU hovers below 70% and garbage collection isn't able to reclaim memory, you might need more JVM memory. This is especially the case if you're on a SKU with limited memory. In most cases, you need to review your queries and client settings and reduce `fetch_size` along with the `LIMIT` chosen in your CQL query.
+
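+As a minimal, illustrative sketch of reducing client-side page size, using the DataStax Python driver (the contact point, keyspace, and table names are hypothetical; your driver and settings may differ):
+
+```python
+from cassandra.cluster import Cluster
+from cassandra.query import SimpleStatement
+
+cluster = Cluster(["10.0.0.4"])              # hypothetical contact point
+session = cluster.connect("my_keyspace")     # hypothetical keyspace
+
+# Pull fewer rows per page into memory and cap the result set explicitly.
+query = SimpleStatement(
+    "SELECT * FROM my_table WHERE pk = %s LIMIT 1000",
+    fetch_size=500,
+)
+rows = session.execute(query, ["some-partition"])
+```
+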
+If you indeed need more memory, you can:
+
+* File a ticket for us to increase the JVM memory settings for you
+* Scale vertically to a SKU that has more memory available
++
+### Tombstones
+
+We run repairs every seven days with reaper, which removes rows whose TTL has expired (called a "tombstone"). Some workloads have more frequent deletes and see warnings like `Read 96 live rows and 5035 tombstone cells for query SELECT ...; token <token> (see tombstone_warn_threshold)` in the Cassandra logs, or even errors indicating that a query couldn't be fulfilled due to excessive tombstones.
+
+A short-term mitigation, if queries don't get fulfilled, is to increase the `tombstone_failure_threshold` in the [Cassandra config](create-cluster-portal.md#update-cassandra-configuration) from the default 100,000 to a higher value.
++
+In addition to this, we recommend reviewing the TTL on the keyspace and potentially running repairs daily to clear out more tombstones. If the TTLs are short, for example less than two days, and data flows in and gets deleted quickly, we recommend reviewing the [compaction strategy](https://cassandra.apache.org/doc/4.1/cassandra/operating/compaction/index.html#types-of-compaction) and favoring `Leveled Compaction Strategy`. In some cases, such actions may be an indication that a review of the data model is required.
+
+### Batch warnings
+
+You might encounter this warning in the [CassandraLogs](monitor-clusters.md#create-setting-portal) and potentially related failures:
+
+`Batch for [<table>] is of size 6.740KiB, exceeding specified threshold of 5.000KiB by 1.740KiB.`
+
+In this case, you should review your queries to stay below the recommended batch size. In rare cases, and as a short-term mitigation, you can increase `batch_size_fail_threshold_in_kb` in the [Cassandra config](create-cluster-portal.md#update-cassandra-configuration) from the default of 50 to a higher value.
++++
+## Large partition warning
+
+You might encounter this warning in the [CassandraLogs](monitor-clusters.md#create-setting-portal):
+
+`Writing large partition <table> (105.426MiB) to sstable <file>`
+
+This indicates a problem in the data model. Here is a [stack overflow article](https://stackoverflow.com/questions/74024443/how-do-i-analyse-and-solve-writing-large-partition-warnings-in-cassandra) that goes into more detail. This can cause severe performance issues and needs to be addressed.
+
+## Next steps
+
+In this article, we laid out some best practices for optimal performance. You can now start working with the cluster:
+
+> [!div class="nextstepaction"]
+> [Create a cluster using Azure Portal](create-cluster-portal.md)
notification-hubs Create Notification Hub Terraform https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/notification-hubs/create-notification-hub-terraform.md
+
+ Title: 'Quickstart: Create an Azure notification hub using Terraform'
+description: In this article, you create an Azure notification hub using Terraform
++++++ Last updated : 4/14/2023++
+# Quickstart: Create an Azure notification hub using Terraform
+
+This article uses Terraform to create an Azure Notification Hubs namespace and a notification hub. The name of each resource is randomly generated to avoid naming conflicts.
+
+Azure Notification Hubs provides an easy-to-use and scaled-out push engine that enables you to send notifications to any platform (iOS, Android, Windows, Kindle, etc.) from any backend (cloud or on-premises). For more information about the service, see [What is Azure Notification Hubs](notification-hubs-push-notification-overview.md).
++
+In this article, you learn how to:
+
+> [!div class="checklist"]
+> * Create a random value for the Azure resource group name using [random_pet](https://registry.terraform.io/providers/hashicorp/random/latest/docs/resources/pet).
+> * Create an Azure resource group using [azurerm_resource_group](https://registry.terraform.io/providers/hashicorp/azurerm/latest/docs/resources/resource_group).
+> * Create a random value for the Azure Notification Hub namespace name using [random_string](https://registry.terraform.io/providers/hashicorp/random/latest/docs/resources/string).
+> * Create an Azure Notification Hub namespace using [azurerm_notification_hub_namespace](https://registry.terraform.io/providers/hashicorp/azurerm/latest/docs/resources/notification_hub_namespace).
+> * Create a random value for the Azure Notification Hub name using [random_string](https://registry.terraform.io/providers/hashicorp/random/latest/docs/resources/string).
+> * Create an Azure Notification Hub using [azurerm_notification_hub](https://registry.terraform.io/providers/hashicorp/azurerm/latest/docs/resources/notification_hub).
++
+## Prerequisites
+
+- [Install and configure Terraform](/azure/developer/terraform/quickstart-configure)
+
+## Implement the Terraform code
+
+> [!NOTE]
+> The sample code for this article is located in the [Azure Terraform GitHub repo](https://github.com/Azure/terraform/tree/master/quickstart/101-notification-hub). You can view the log file containing the [test results from current and previous versions of Terraform](https://github.com/Azure/terraform/tree/master/quickstart/101-notification-hub/TestRecord.md).
+>
+> See more [articles and sample code showing how to use Terraform to manage Azure resources](/azure/terraform)
+
+1. Create a directory in which to test and run the sample Terraform code and make it the current directory.
+
+1. Create a file named `providers.tf` and insert the following code:
+
+ [!code-terraform[master](~/terraform_samples/quickstart/101-notification-hub/providers.tf)]
+
+1. Create a file named `main.tf` and insert the following code:
+
+ [!code-terraform[master](~/terraform_samples/quickstart/101-notification-hub/main.tf)]
+
+1. Create a file named `variables.tf` and insert the following code:
+
+ [!code-terraform[master](~/terraform_samples/quickstart/101-notification-hub/variables.tf)]
+
+1. Create a file named `outputs.tf` and insert the following code:
+
+ [!code-terraform[master](~/terraform_samples/quickstart/101-notification-hub/outputs.tf)]
+
+## Initialize Terraform
++
+## Create a Terraform execution plan
++
+## Apply a Terraform execution plan
++
+## Verify the results
+
+#### [Azure CLI](#tab/azure-cli)
+
+1. Get the Azure resource group name.
+
+ ```console
+ resource_group_name=$(terraform output -raw resource_group_name)
+ ```
+
+1. Get the namespace name.
+
+ ```console
+ notification_hub_namespace_name=$(terraform output -raw notification_hub_namespace_name)
+ ```
+
+1. Run [az notification-hub list](/cli/azure/notification-hub#az-notification-hub-list) to display the hubs for the specified namespace.
+
+ ```azurecli
+ az notification-hub list \
+ --resource-group $resource_group_name \
+ --namespace-name $notification_hub_namespace_name
+ ```
+
+#### [Azure PowerShell](#tab/azure-powershell)
+
+1. Get the Azure resource group name.
+
+ ```console
+ $resource_group_name=$(terraform output -raw resource_group_name)
+ ```
+
+1. Get the namespace name.
+
+ ```console
+ $notification_hub_namespace_name=$(terraform output -raw notification_hub_namespace_name)
+ ```
+
+1. Run [Get-AzNotificationHub](/powershell/module/az.notificationhubs/get-aznotificationhub) to display the hubs for the specified namespace.
+
+ ```azurepowershell
+ Get-AzNotificationHub -ResourceGroup $resource_group_name `
+ -Namespace $notification_hub_namespace_name
+ ```
+++
+## Clean up resources
++
+## Troubleshoot Terraform on Azure
+
+[Troubleshoot common problems when using Terraform on Azure](/azure/developer/terraform/troubleshoot)
+
+## Next steps
+
+> [!div class="nextstepaction"]
+> [Set up push notifications in Azure Notification Hubs](configure-notification-hub-portal-pns-settings.md)
operator-nexus Concepts Observability https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/operator-nexus/concepts-observability.md
Title: "Azure Operator Nexus: observability using Azure Monitor" description: Operator Nexus uses Azure Monitor and collects and aggregates data in Azure Log Analytics Workspace (LAW). The analysis, visualization, and alerting is performed on this collected data.---- Previously updated : 03/06/2023 #Required; mm/dd/yyyy format.-++++ Last updated : 03/06/2023+ # Azure Operator Nexus observability
operator-nexus Concepts Resource Types https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/operator-nexus/concepts-resource-types.md
Title: Azure Operator Nexus resource types description: Operator Nexus platform and tenant resource types--++ - Previously updated : 03/06/2023 #Required; mm/dd/yyyy format.-+ Last updated : 03/06/2023+ # Azure Operator Nexus resource types
operator-nexus Howto Azure Operator Nexus Prerequisites https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/operator-nexus/howto-azure-operator-nexus-prerequisites.md
Title: "Azure Operator Nexus: Before you start Network Fabric Controller and Clu
description: Prepare for create the Azure Operator Nexus Network Fabric Controller and Cluster Manger. -- Previously updated : 03/03/2023 #Required; mm/dd/yyyy format.-++ Last updated : 03/03/2023+ # Operator Nexus Azure resources prerequisites
operator-nexus Howto Configure Cluster https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/operator-nexus/howto-configure-cluster.md
Title: "Azure Operator Nexus: How to configure the Cluster deployment"
description: Learn the steps for deploying the Operator Nexus Cluster. -- Previously updated : 03/03/2023 #Required; mm/dd/yyyy format.++ Last updated : 03/03/2023
operator-nexus Howto Configure Isolation Domain https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/operator-nexus/howto-configure-isolation-domain.md
Title: "Azure Operator Nexus: How to configure the L2 and L3 isolation-domains in Operator Nexus instances" description: Learn to create, view, list, update, delete commands for Layer 2 and Layer isolation-domains in Operator Nexus instances---- Previously updated : 04/02/2023 #Required; mm/dd/yyyy format.-++++ Last updated : 04/02/2023+ # Configure L2 and L3 isolation-domains using managed network fabric services
operator-nexus Howto Configure Network Fabric Controller https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/operator-nexus/howto-configure-network-fabric-controller.md
Title: "Azure Operator Nexus: How to configure Network fabric Controller" description: How to configure Network fabric Controller---- Previously updated : 02/06/2023 #Required; mm/dd/yyyy format.++++ Last updated : 02/06/2023 # Create and modify a Network Fabric Controller using Azure CLI
operator-nexus Howto Configure Network Fabric https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/operator-nexus/howto-configure-network-fabric.md
Title: "Azure Operator Nexus: How to configure the Network Fabric" description: Learn to create, view, list, update, delete commands for Network Fabric---- Previously updated : 03/26/2023 #Required; mm/dd/yyyy format.++++ Last updated : 03/26/2023
operator-nexus Howto Install Cli Extensions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/operator-nexus/howto-install-cli-extensions.md
Title: "Azure Operator Nexus: Install CLI extensions" description: Learn to install the needed Azure CLI extensions for Operator Nexus---+++ Last updated 03/06/2023
# Prepare to install Azure CLI extensions
operator-nexus Howto Monitor Aks H Cluster https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/operator-nexus/howto-monitor-aks-h-cluster.md
Title: "Azure Operator Nexus: Monitoring of AKS-Hybrid cluster" description: How-to guide for setting up monitoring of AKS-Hybrid cluster on Operator Nexus.---- Previously updated : 01/26/2023 #Required; mm/dd/yyyy format.++++ Last updated : 01/26/2023
operator-nexus Howto Monitor Virtualized Network Functions Virtual Machines https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/operator-nexus/howto-monitor-virtualized-network-functions-virtual-machines.md
Title: "Azure Operator Nexus: Monitoring of Virtualized Network Function Virtual Machines" description: How-to guide for setting up monitoring of Virtualized Network Function Virtual Machines on Operator Nexus.---- Previously updated : 02/01/2023 #Required; mm/dd/yyyy format.-++++ Last updated : 02/01/2023+ # Monitoring virtual machines (for virtualized network function)
operator-nexus Howto Platform Prerequisites https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/operator-nexus/howto-platform-prerequisites.md
Title: "Azure Operator Nexus: Before you start platform deployment pre-requisites" description: Learn the prerequisite steps for deploying the Operator Nexus platform software.---- Previously updated : 03/13/2023 #Required; mm/dd/yyyy format.-++++ Last updated : 03/13/2023+ # Operator Nexus platform prerequisites
operator-nexus List Of Metrics Collected https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/operator-nexus/list-of-metrics-collected.md
Title: List of Metrics Collected in Azure Operator Nexus. description: List of metrics collected in Azure Operator Nexus. Previously updated : 02/03/2023 Last updated : 02/03/2023 # List of metrics collected in Azure Operator Nexus
postgresql Concepts Data Encryption https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/flexible-server/concepts-data-encryption.md
Azure Database for PostgreSQL - Flexible Server supports advanced [Data Recovery
* As support for Geo-redundant backup with data encryption using CMK is currently in preview, there is no Azure CLI support yet for server creation with both of these features enabled. * If [Read replica database](../flexible-server/concepts-read-replicas.md) is set up to be encrypted with CMK during creation, its encryption key needs to be resident in an Azure Key Vault (AKV) in the region where the Read replica database resides. [User assigned identity](../../active-directory/managed-identities-azure-resources/how-manage-user-assigned-managed-identities.md) to authenticate against this Azure Key Vault (AKV) needs to be created in the same region. -
-> [!NOTE]
-> CLI examples below are based on 2.45.0 version of Azure Database for PostgreSQL - Flexible Server CLI libraries
-
-## Setup Customer Managed Key during Server Creation
-
-### Portal
-
-Prerequisites:
--- Azure Active Directory (Azure AD) user managed identity in region where Postgres Flex Server will be created. Follow this [tutorial](../../active-directory/managed-identities-azure-resources/qs-configure-portal-windows-vm.md) to create identity.--- Key Vault with key in region where Postgres Flex Server will be created. Follow this [tutorial](../../key-vault/general/quick-create-portal.md) to create Key Vault and generate key. Follow [requirements section above](#requirements-for-configuring-data-encryption-for-azure-database-for-postgresql-flexible-server) for required Azure Key Vault settings-
-Follow the steps below to enable CMK while creating Postgres Flexible Server using Azure portal.
-
-1. Navigate to Azure Database for PostgreSQL - Flexible Server create pane via Azure portal
-
-1. Provide required information on Basics and Networking tabs
-
-1. Navigate to Security tab. On the screen, provide Azure Active Directory (Azure AD) identity that has access to the Key Vault and Key in Key Vault in the same region where you're creating this server
-
-1. On Review Summary tab, make sure that you provided correct information in Security section and press Create button
-
-1. Once it's finished, you should be able to navigate to Data Encryption screen for the server and update identity or key if necessary
--
-### CLI:
-
-The Azure command-line interface (Azure CLI) is a set of commands used to create and manage Azure resources. The Azure CLI is available across Azure services and is designed to get you working quickly with Azure, with an emphasis on automation.
-
-Prerequisites:
--- You must have an Azure subscription and be an administrator on that subscription.-
-Follow the steps below to enable CMK while creating Postgres Flexible Server using Azure CLI.
-
-1. Create a key vault and a key to use for a customer-managed key. Also enable purge protection and soft delete on the key vault.
-
-```azurecli-interactive
- az keyvault create -g <resource_group> -n <vault_name> --location <azure_region> --enable-purge-protection true
-```
-
-2. In the created Azure Key Vault, create the key that will be used for the data encryption of the Azure Database for PostgreSQL - Flexible server.
-
-```azurecli-interactive
- keyIdentifier=$(az keyvault key create --name <key_name> -p software --vault-name <vault_name> --query key.kid -o tsv)
-```
-3. Create Managed Identity which will be used to retrieve key from Azure Key Vault
-```azurecli-interactive
- identityPrincipalId=$(az identity create -g <resource_group> --name <identity_name> --location <azure_region> --query principalId -o tsv)
-```
-
-4. Add access policy with key permissions of *wrapKey*, *unwrapKey*, *get*, *list* in Azure KeyVault to the managed identity we created above
-```azurecli-interactive
-az keyvault set-policy -g <resource_group> -n <vault_name> --object-id $identityPrincipalId --key-permissions wrapKey unwrapKey get list
-```
-5. Finally, lets create Azure Database for PostgreSQL - Flexible Server with CMK based encryption enabled
-```azurecli-interactive
-az postgres flexible-server create -g <resource_group> -n <postgres_server_name> --location <azure_region> --key $keyIdentifier --identity <identity_name>
-```
--
-### Azure Resource Manager (ARM)
-ARM templates are a form of infrastructure as code, a concept where you define the infrastructure you need to be deployed.
-Using ARM templates in managing your Azure environment has many benefits, as declarative syntax removes the requirement of writing complicated deployment scripts to handle multiple deployment scenarios. For more on ARM templates see this [doc](../../azure-resource-manager/templates/overview.md)
-
-Prerequisites:
-- You must have an Azure subscription and be an administrator on that subscription.-- Key Vault with key in region where Postgres Flex Server will be created. Follow this [tutorial](../../key-vault/general/quick-create-portal.md) to create Key Vault and generate key. -
-Following is an example Azure ARM template that creates server with Customer Managed Key (CMK) based encryption as defined in *dataEncryptionData* section of ARM template
-```json
-{
- "$schema": "http://schema.management.azure.com/schemas/2014-04-01-preview/deploymentTemplate.json#",
- "contentVersion": "1.0.0.0",
- "parameters":
- {
- "administratorLogin":
- {
- "type": "string"
- },
- "administratorLoginPassword":
- {
- "type": "securestring"
- },
- "target":
- {
- "type": "string"
- },
- "name":
- {
- "type": "string"
- },
- "serverEdition":
- {
- "type": "string",
- "defaultValue": "GeneralPurpose"
- },
- "storageSizeGB":
- {
- "type": "int",
- "defaultValue": 128
- },
- "haEnabled":
- {
- "type": "string",
- "defaultValue": "Disabled"
- },
- "availabilityZone":
- {
- "type": "string",
- "defaultValue": "1"
- },
- "standbyAvailabilityZone":
- {
- "type": "string",
- "defaultValue": "2"
- },
- "version":
- {
- "type": "string"
- },
- "tags":
- {
- "type": "object",
- "defaultValue":
- {}
- },
- "firewallRules":
- {
- "type": "object",
- "defaultValue":
- {}
- },
- "backupRetentionDays":
- {
- "type": "int"
- },
- "geoRedundantBackup":
- {
- "type": "string"
- },
- "vmName":
- {
- "type": "string",
- "defaultValue": "Standard_D4s_v3"
- },
- "vnetData":
- {
- "type": "object",
- "metadata":
- {
- "description": "Vnet data is an object which contains all parameters pertaining to vnet and subnet"
- },
- "defaultValue":
- {
- "virtualNetworkName": "",
- "subnetName": "",
- "privateDnsZoneName": "",
- "Network":
- {}
- }
- },
- "userAssignedIdentitity":
- {
- "type": "string",
- "defaultValue": "postgresflexi"
- },
- "managedIdentityType":
- {
- "type": "string",
- "defaultValue": "UserAssigned"
- },
- "identityData":
- {
- "type": "object",
- "defaultValue":
- {}
- },
- "dataEncryptionData":
- {
- "type": "object",
- "defaultValue":
- {}
- },
- "apiVersion":
- {
- "type": "string",
- "defaultValue": "2021-06-01"
- },
- "aadEnabled":
- {
- "type": "bool",
- "defaultValue": false
- },
- "aadData":
- {
- "type": "object",
- "defaultValue":
- {}
- },
- "authConfig":
- {
- "type": "object",
- "defaultValue":
- {}
- },
- "azSubscriptionId":
- {
- "type": "string",
- "metadata":
- {
- "description": "User subscription id"
- }
- },
- "resource_group":
- {
- "type": "string"
- },
- "cmkKeyvault":
- {
- "type": "string"
- },
- "cmkUri":
- {
- "type": "string"
- },
- "dataEncryptionType":
- {
- "type": "string"
- }
- },
- "variables":
- {
- "firewallRules": "[parameters('firewallRules').rules]",
- "identityData":
- {
- "type": "UserAssigned",
- "UserAssignedIdentities":
- {
- "[resourceId('Microsoft.ManagedIdentity/userAssignedIdentities', parameters('userAssignedIdentitity'))]":
- {}
- }
- },
- "Network":
- {
- "DelegatedSubnetResourceId": "[concat('/subscriptions/', parameters('azSubscriptionId'), '/resourceGroups/', parameters('resource_group'), '/providers/Microsoft.Network/virtualNetworks/', parameters('vnetData').virtualNetworkName, '/subnets/', parameters('vnetData').subnetName)]",
- "PrivateDnsZoneResourceId": "[concat('/subscriptions/', parameters('azSubscriptionId'), '/resourceGroups/', parameters('resource_group'), '/providers/Microsoft.Network/privateDnsZones/', parameters('vnetData').privateDnsZoneName)]",
- "PrivateDnsZoneArmResourceId": "[concat('/subscriptions/', parameters('azSubscriptionId'), '/resourceGroups/', parameters('resource_group'), '/providers/Microsoft.Network/privateDnsZones/', parameters('vnetData').privateDnsZoneName)]"
- },
- "dataEncryptionData":
- {
- "type": "[parameters('dataEncryptionType')]",
- "primaryUserAssignedIdentityId": "[resourceId('Microsoft.ManagedIdentity/userAssignedIdentities', parameters('userAssignedIdentitity'))]",
- "primaryKeyUri": "[parameters('cmkUri')]"
- }
- },
- "resources":
- [
- {
- "apiVersion": "[parameters('apiVersion')]",
- "location": "[parameters('target')]",
- "name": "[parameters('name')]",
- "identity": "[if(empty(variables('identityData')), json('null'), variables('identityData'))]",
- "properties":
- {
- "version": "[parameters('version')]",
- "administratorLogin": "[parameters('administratorLogin')]",
- "administratorLoginPassword": "[parameters('administratorLoginPassword')]",
- "Network": "[variables('Network')]",
- "availabilityZone": "[parameters('availabilityZone')]",
- "Storage":
- {
- "StorageSizeGB": "[parameters('storageSizeGB')]"
- },
- "Backup":
- {
- "backupRetentionDays": "[parameters('backupRetentionDays')]",
- "geoRedundantBackup": "[parameters('geoRedundantBackup')]"
- },
- "highAvailability":
- {
- "mode": "[parameters('haEnabled')]",
- "standbyAvailabilityZone": "[parameters('standbyAvailabilityZone')]"
- },
- "dataencryption": "[if(empty(variables('dataEncryptionData')), json('null'), variables('dataEncryptionData'))]",
- "authConfig": "[if(empty(parameters('authConfig')), json('null'), parameters('authConfig'))]"
- },
- "sku":
- {
- "name": "[parameters('vmName')]",
- "tier": "[parameters('serverEdition')]"
- },
- "tags": "[parameters('tags')]",
- "type": "Microsoft.DBforPostgreSQL/flexibleServers"
- },
- {
- "condition": "[parameters('aadEnabled')]",
- "type": "Microsoft.Resources/deployments",
- "apiVersion": "2018-05-01",
- "name": "addAdmins",
- "dependsOn":
- [
- "[concat('Microsoft.DBforPostgreSQL/flexibleServers/', parameters('name'))]"
- ],
- "properties":
- {
- "mode": "Incremental",
- "template":
- {
- "$schema": "http://schema.management.azure.com/schemas/2015-01-01/deploymentTemplate.json#",
- "contentVersion": "1.0.0.0",
- "resources":
- [
- {
- "type": "Microsoft.DBforPostgreSQL/flexibleServers/administrators",
- "name": "[concat(parameters('name'),'/', parameters('aadData').adminSid)]",
- "apiVersion": "[parameters('apiVersion')]",
- "properties":
- {
- "tenantId": "[parameters('aadData').tenantId]",
- "principalName": "[parameters('aadData').principalName]",
- "principalType": "[parameters('aadData').principalType]"
- }
- }
- ]
- }
- }
- },
- {
- "condition": "[greater(length(variables('firewallRules')), 0)]",
- "type": "Microsoft.Resources/deployments",
- "apiVersion": "2019-08-01",
- "name": "[concat('firewallRules-', copyIndex())]",
- "copy":
- {
- "count": "[if(greater(length(variables('firewallRules')), 0), length(variables('firewallRules')), 1)]",
- "mode": "Serial",
- "name": "firewallRulesIterator"
- },
- "dependsOn":
- [
- "[concat('Microsoft.DBforPostgreSQL/flexibleServers/', parameters('name'))]",
- "Microsoft.Resources/deployments/addAdmins"
- ],
- "properties":
- {
- "mode": "Incremental",
- "template":
- {
- "$schema": "http://schema.management.azure.com/schemas/2014-04-01-preview/deploymentTemplate.json#",
- "contentVersion": "1.0.0.0",
- "resources":
- [
- {
- "type": "Microsoft.DBforPostgreSQL/flexibleServers/firewallRules",
- "name": "[concat(parameters('name'),'/',variables('firewallRules')[copyIndex()].name)]",
- "apiVersion": "[parameters('apiVersion')]",
- "properties":
- {
- "StartIpAddress": "[variables('firewallRules')[copyIndex()].startIPAddress]",
- "EndIpAddress": "[variables('firewallRules')[copyIndex()].endIPAddress]"
- }
- }
- ]
- }
- }
- },
- {
- "type": "Microsoft.DBforPostgreSQL/flexibleServers/configurations",
- "apiVersion": "2022-12-01",
- "name": "[concat(parameters('name'), '/shared_preload_libraries')]",
- "dependsOn":
- [
- "[resourceId('Microsoft.DBforPostgreSQL/flexibleServers', parameters('name'))]"
- ],
- "properties":
- {
- "value": "pgaudit",
- "source": "user-override"
- }
- },
- {
- "type": "Microsoft.DBforPostgreSQL/flexibleServers/configurations",
- "apiVersion": "2022-12-01",
- "name": "[concat(parameters('name'), '/pgaudit.log')]",
- "dependsOn":
- [
- "[resourceId('Microsoft.DBforPostgreSQL/flexibleServers', parameters('name'))]"
- ],
- "properties":
- {
- "value": "all",
- "source": "user-override"
- }
- }
- ]
-}
-```
-## Update Customer Managed Key on the CMK enabled Flexible Server
-
-### Portal
-
-Prerequisites:
--- Azure Active Directory (Azure AD) user-managed identity in region where Postgres Flex Server will be created. Follow this [tutorial](../../active-directory/managed-identities-azure-resources/qs-configure-portal-windows-vm.md) to create identity.--- Key Vault with key in region where Postgres Flex Server will be created. Follow this [tutorial](../../key-vault/general/quick-create-portal.md) to create Key Vault and generate key.-
-Follow the steps below to update CMK on CMK enabled Flexible Server using Azure portal:
-
-1. Navigate to Azure Database for PostgreSQL - Flexible Server create a page via the Azure portal.
-
-1. Navigate to Data Encryption screen under Security tab
-
-1. Select different identity to connect to Azure Key Vault, remembering that this identity needs to have proper access rights to the Key Vault
-
-1. Select different key by choosing subscription, Key Vault and key from dropdowns provided.
--
-### CLI
-
-The Azure command-line interface (Azure CLI) is a set of commands used to create and manage Azure resources. The Azure CLI is available across Azure services and is designed to get you working quickly with Azure, with an emphasis on automation.
--
-Prerequisites:
-- You must have an Azure subscription and be an administrator on that subscription.-- Key Vault with key in region where Postgres Flex Server will be created. Follow this [tutorial](../../key-vault/general/quick-create-portal.md) to create Key Vault and generate key. -
-Follow the steps below to change\rotate key or identity after creation of server with data encryption.
-1. Change key/identity for data encryption for existing server, first lets get new key identifier
-```azurecli-interactive
- newKeyIdentifier=$(az keyvault key show --vault-name <vault_name> --name <key_name> --query key.kid -o tsv)
-```
-2. Update server with new key and\or identity
-```azurecli-interactive
- az postgres flexible-server update --resource-group <resource_group> --name <server_name> --key $newKeyIdentifier --identity <identity_name>
-```
## Limitations The following are current limitations for configuring the customer-managed key in Flexible Server:
postgresql Concepts Troubleshooting Guides https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/flexible-server/concepts-troubleshooting-guides.md
+
+ Title: Troubleshooting Guides for Azure Database for PostgreSQL - Flexible Server Preview
+description: Troubleshooting guides for Azure Database for PostgreSQL - Flexible Server.
+ Last updated : 03/21/2023
+# Troubleshooting guides for Azure Database for PostgreSQL - Flexible Server Preview
++
+> [!NOTE]
+> Troubleshooting guides for PostgreSQL Flexible Server are currently in preview.
+
+The Troubleshooting Guides for Azure Database for PostgreSQL - Flexible Server are designed to help you quickly identify and resolve common challenges you may encounter while using Azure Database for PostgreSQL. Integrated directly into the Azure portal, the Troubleshooting Guides provide actionable insights, recommendations, and data visualizations to assist you in diagnosing and addressing issues related to common performance problems. With these guides at your disposal, you'll be better equipped to optimize your PostgreSQL experience on Azure and ensure a smoother, more efficient database operation.
+
+## Overview
+
+The troubleshooting guides available in Azure Database for PostgreSQL - Flexible Server provide you with the necessary tools to analyze and troubleshoot prevalent performance issues,
+including:
+* High CPU Usage,
+* High Memory Usage,
+* High IOPS Usage,
+* High Temporary Files,
+* Autovacuum Monitoring,
+* Autovacuum Blockers.
++
+Each guide includes multiple charts, guidelines, and recommendations tailored to the specific problem you may encounter, which can help expedite the troubleshooting process.
+The troubleshooting guides are directly integrated into the Azure portal and your Azure Database for PostgreSQL - Flexible Server, making them convenient and easy to use.
+
+The troubleshooting guides consist of the following components:
+
+- **High CPU Usage**
+
+ * CPU Utilization
+ * Workload Details
+ * Transaction Trends and Counts
+ * Long Running Transactions
+ * Top CPU Consuming queries
+ * Total User Only Connections
+
+- **High Memory Usage**
+
+ * Memory Utilization
+ * Workload Details
+ * Long Running Sessions
+ * Top Queries by Data Usage
+ * Total User only Connections
+ * Guidelines for configuring parameters
+
+- **High IOPS Usage**
+
+ * IOPS Usage
+ * Workload Details
+ * Session Details
+ * Top Queries by IOPS
+ * IO Wait Events
+ * Checkpoint Details
+ * Storage Usage
+
+- **High Temporary Files**
+
+ * Storage Utilization
+ * Temporary Files Generated
+ * Workload Details
+ * Top Queries by Temporary Files
+
+- **Autovacuum Monitoring**
+
+ * Bloat Ratio
+ * Tuple Counts
+ * Tables Vacuumed & Analyzed Execution Counts
+ * Autovacuum Workers Execution Counts
+
+- **Autovacuum Blockers**
+
+ * Emergency AV and Wraparound
+ * Autovacuum Blockers
++
+Before using any troubleshooting guide, it is essential to ensure that all prerequisites are in place. For a detailed list of prerequisites, please refer to the [Use Troubleshooting Guides](how-to-troubleshooting-guides.md) article.
+
+### Limitations
+
+* Troubleshooting Guides are not available for [read replicas](concepts-read-replicas.md).
+* Enabling Query Store on the Burstable pricing tier can negatively affect performance. As a result, using Query Store with this pricing tier isn't generally recommended.
++
+## Next steps
+
+* Learn more about [How to use Troubleshooting Guides](how-to-troubleshooting-guides.md).
+* Learn more about [Troubleshoot high CPU utilization](how-to-high-cpu-utilization.md).
+* Learn more about [High memory utilization](how-to-high-memory-utilization.md).
+* Learn more about [Troubleshoot high IOPS utilization](how-to-high-io-utilization.md).
+* Learn more about [Autovacuum Tuning](how-to-autovacuum-tuning.md).
+
+[//]: # (* Learn how to [create and manage read replicas in the Azure CLI and REST API]&#40;how-to-read-replicas-cli.md&#41;.)
postgresql How To Create Server Customer Managed Key Azure Api https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/flexible-server/how-to-create-server-customer-managed-key-azure-api.md
+
+ Title: Create and manage Azure Database for PostgreSQL - Flexible Server with data encrypted by Customer Managed Keys using Azure REST API
+description: Create and manage Azure Database for PostgreSQL - Flexible Server with data encrypted by Customer Managed Keys using Azure REST API
+ Last updated : 04/13/2023
+# Create and manage Azure Database for PostgreSQL - Flexible Server with data encrypted by Customer Managed Keys (CMK) using Azure REST API
++
+In this article, you learn how to create Azure Database for PostgreSQL with data encrypted by Customer Managed Keys (CMK) by using the Azure REST API. For more information on encryption with Customer Managed Keys (CMK), see [overview](../flexible-server/concepts-data-encryption.md).
+
+## Set up Customer Managed Key during Server Creation
+
+Prerequisites:
+- You must have an Azure subscription and be an administrator on that subscription.
+- An Azure managed identity in the region where the Postgres Flexible Server will be created.
+- A Key Vault with a key in the region where the Postgres Flexible Server will be created. Follow this [tutorial](../../key-vault/general/quick-create-portal.md) to create a Key Vault and generate a key.
++
+> [!NOTE]
+> The API examples below are based on the 2022-12-01 API version.
+
+You can create a PostgreSQL Flexible Server encrypted with a Customer Managed Key by using the [create API](https://learn.microsoft.com/rest/api/postgresql/flexibleserver/servers/create?tabs=HTTP):
+```rest
+PUT https://management.azure.com/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.DBForPostgreSql/flexibleServers/{serverName}?api-version=2022-12-01
+
+```
+```json
+{
+ "location": "eastus",
+ "identity": {
+ "type": "UserAssigned",
+ "UserAssignedIdentities": {
+ "/subscriptions/{subscriptionId}/resourcegroups/{resourceGroupName}/providers/Microsoft.ManagedIdentity/userAssignedIdentities/{userIdentity}": {}
+ }
+ },
+ "properties": {
+ "CreateMode": "Create",
+ "administratorLogin": "admin",
+ "AdministratorLoginPassword": "p@ssw0rd",
+ "version": "14",
+ "dataencryption": {
+ "type": "AzureKeyVault",
+ "primaryUserAssignedIdentityId": "/subscriptions/{subscriptionId}/resourcegroups/{resourceGroupName}/providers/Microsoft.ManagedIdentity/userAssignedIdentities/{userIdentity}",
+ "primaryKeyUri": "{keyVaultUri}"
+ }
+ }
+}
+```
+The Key Vault URI can be copied from the key's **Key Identifier** field in the Azure Key Vault portal UI.
+You can also fetch the Key Vault URI programmatically by using the [Azure REST API](https://learn.microsoft.com/rest/api/keyvault/keyvault/vaults/get?tabs=HTTP).
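As an illustrative sketch only (not taken from the article), the key identifier can also be read from the Key Vault data plane with the *Get Key* REST operation, assuming api-version 7.4 and a caller whose token carries the `get` key permission:

```rest
GET https://{vaultName}.vault.azure.net/keys/{keyName}?api-version=7.4
Authorization: Bearer {access-token}
```

The `key.kid` value in the response is the key identifier that can be supplied as `primaryKeyUri` in the create request above.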
+
+## Next steps
+
+- [Flexible Server encryption with Customer Managed Key (CMK)](../flexible-server/concepts-data-encryption.md)
+- [Azure Active Directory](../../active-directory-domain-services/overview.md)
postgresql How To High Io Utilization https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/flexible-server/how-to-high-io-utilization.md
Last updated 08/16/2022-+ # Troubleshoot high IOPS utilization for Azure Database for PostgreSQL - Flexible Server
postgresql How To Pgdump Restore https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/flexible-server/how-to-pgdump-restore.md
Last updated 09/16/2022-+ # Best practices for pg_dump and pg_restore for Azure Database for PostgreSQL - Flexible Server
postgresql How To Troubleshooting Guides https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/flexible-server/how-to-troubleshooting-guides.md
+
+ Title: Troubleshooting guides - Azure portal - Azure Database for PostgreSQL - Flexible Server Preview
+description: Learn how to use Troubleshooting guides for Azure Database for PostgreSQL - Flexible Server from the Azure portal.
+ Last updated : 03/21/2023
+# Use the Troubleshooting guides for Azure Database for PostgreSQL - Flexible Server Preview
++
+> [!NOTE]
+> Troubleshooting guides for PostgreSQL Flexible Server are currently in preview.
+
+In this article, you'll learn how to use Troubleshooting guides for Azure Database for PostgreSQL from the Azure portal. To learn more about Troubleshooting guides, see the [overview](concepts-troubleshooting-guides.md).
+
+## Prerequisites
+
+To effectively troubleshoot a specific issue, you need to make sure you have all the necessary data in place.
+Each troubleshooting guide requires a specific set of data, which is sourced from three separate features: [Diagnostic settings](howto-configure-and-access-logs.md), [Query Store](concepts-query-store.md), and [Enhanced Metrics](concepts-monitoring.md#enabling-enhanced-metrics).
+All troubleshooting guides require logs to be sent to the Log Analytics workspace, but the specific category of logs to be captured may vary depending on the particular guide.
+
+Please follow the steps described in the [Configure and Access Logs in Azure Database for PostgreSQL - Flexible Server](howto-configure-and-access-logs.md) article to configure diagnostic settings and send the logs to the Log Analytics workspace.
+Query Store and Enhanced Metrics are configured via server parameters. Follow the steps described in the "Configure server parameters in Azure Database for PostgreSQL - Flexible Server" articles for the [Azure portal](howto-configure-server-parameters-using-portal.md) or [Azure CLI](howto-configure-server-parameters-using-cli.md).
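For illustration only (the resource group and server names are placeholders, and the exact parameters and values you need are listed in the table below), setting these server parameters from the Azure CLI could look like this:

```azurecli-interactive
# Capture query-level data in Query Store (placeholder resource names)
az postgres flexible-server parameter set \
  --resource-group <resource_group> \
  --server-name <server_name> \
  --name pg_qs.query_capture_mode \
  --value TOP

# Turn on enhanced metrics for database activity
az postgres flexible-server parameter set \
  --resource-group <resource_group> \
  --server-name <server_name> \
  --name metrics.collector_database_activity \
  --value ON
```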
+
+The table below provides information on the required log categories for each troubleshooting guide, as well as the necessary Query Store, Enhanced Metrics and Server Parameters prerequisites.
+
+| Troubleshooting guide | Diagnostic settings log categories | Query Store | Enhanced Metrics | Server Parameters |
+|:-|:--|-|-|-|
+| Autovacuum Blockers | PostgreSQL Sessions, PostgreSQL Database Remaining Transactions | N/A | N/A | N/A |
+| Autovacuum Monitoring | PostgreSQL Server Logs, PostgreSQL Tables Statistics, PostgreSQL Database Remaining Transactions | N/A | N/A | log_autovacuum_min_duration |
+| High CPU Usage | PostgreSQL Server Logs, PostgreSQL Sessions, AllMetrics | pg_qs.query_capture_mode to TOP or ALL | metrics.collector_database_activity | N/A |
+| High IOPS Usage | PostgreSQL Query Store Runtime, PostgreSQL Server Logs, PostgreSQL Sessions, PostgreSQL Query Store Wait Statistics | pgms_wait_sampling.query_capture_mode to ALL | metrics.collector_database_activity | N/A |
+| High Memory Usage | PostgreSQL Server Logs, PostgreSQL Sessions, PostgreSQL Query Store Runtime | pg_qs.query_capture_mode to TOP or ALL | metrics.collector_database_activity | N/A |
+| High Temporary Files | PostgreSQL Sessions, PostgreSQL Query Store Runtime, PostgreSQL Query Store Wait Statistics | pg_qs.query_capture_mode to TOP or ALL | metrics.collector_database_activity | N/A |
++
+> [!NOTE]
+> Please note that if you have recently enabled diagnostic settings, query store, enhanced metrics or server parameters, it may take some time for the data to be populated. Additionally, if there has been no activity on the database within a certain time frame, the charts might appear empty. In such cases, try changing the time range to capture relevant data. Be patient and allow the system to collect and display the necessary data before proceeding with your troubleshooting efforts.
+
+## Using Troubleshooting guides
+
+To use troubleshooting guides, follow these steps:
+
+1. Open the Azure portal and find a Postgres instance that you want to examine.
+
+2. From the left-side menu, open Help > Troubleshooting guides.
+
+3. Navigate to the top of the page where you will find a series of tabs, each representing one of the six problems you may wish to resolve. Click on the relevant tab.
+
+ :::image type="content" source="./media/how-to-troubleshooting-guides/portal-blade-overview.png" alt-text="Screenshot of Troubleshooting guides - tabular view.":::
+
+4. Select the time range during which the problem occurred.
+
+ :::image type="content" source="./media/how-to-troubleshooting-guides/time-range.png" alt-text="Screenshot of time range picker.":::
+
+5. Follow the step-by-step instructions provided by the guide. Pay close attention to the charts and data visualizations plotted within the troubleshooting steps, as they can help you identify any inaccuracies or anomalies. Use this information to effectively diagnose and resolve the problem at hand.
+
+### Retrieving the Query Text
+
+Due to privacy considerations, certain information such as query text and usernames may not be displayed within the Azure portal.
+To retrieve the query text, you will need to log in to your Azure Database for PostgreSQL - Flexible Server instance.
+Access the `azure_sys` database using the PostgreSQL client of your choice, where query store data is stored.
+Once connected, query the `query_store.query_texts_view` view to retrieve the desired query text.
+
+In the example shown below, we utilize Azure Cloud Shell and the `psql` tool to accomplish this task:
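The article's screenshot isn't reproduced here. As a rough sketch only (the connection values and the `query_text_id` filter are hypothetical placeholders, and the column names assume the standard Query Store views), the lookup could resemble:

```sql
-- Connect to the azure_sys database, where Query Store data is kept, for example:
--   psql "host=<server-name>.postgres.database.azure.com port=5432 dbname=azure_sys user=<admin-user> sslmode=require"

-- Then retrieve the text of a query surfaced in the troubleshooting guide
SELECT query_sql_text
FROM query_store.query_texts_view
WHERE query_text_id = 12345;   -- hypothetical query ID taken from the portal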
++
+### Retrieving the Username
+
+For privacy reasons, the Azure portal displays the role ID from the PostgreSQL metadata (pg_catalog) rather than the actual username.
+To retrieve the username, you can query the `pg_roles` view or use the query shown below in your PostgreSQL client of choice, such as Azure Cloud Shell and the `psql` tool:
+
+```sql
+SELECT 'UserID'::regrole;
+```
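If you prefer to resolve the role through the `pg_roles` catalog mentioned above, a small hedged variant (the numeric role ID is a made-up placeholder) could be:

```sql
-- Map a role ID (OID) shown in the portal back to a role name
SELECT rolname
FROM pg_roles
WHERE oid = 16389;   -- hypothetical role ID from the portal
```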
+++
+## Next steps
+
+* Learn more about [Troubleshoot high CPU utilization](how-to-high-cpu-utilization.md).
+* Learn more about [High memory utilization](how-to-high-memory-utilization.md).
+* Learn more about [Troubleshoot high IOPS utilization](how-to-high-io-utilization.md).
+* Learn more about [Autovacuum Tuning](how-to-autovacuum-tuning.md).
+
+[//]: # (* Learn how to [create and manage read replicas in the Azure CLI and REST API]&#40;how-to-read-replicas-cli.md&#41;.)
postgresql Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/single-server/overview.md
These capabilities require almost no administration, and all are provided at no
## Deployment models
-Azure Database for PostgreSQL powered by the PostgreSQL community edition is available in three deployment modes:
+Azure Database for PostgreSQL powered by the PostgreSQL community edition is available in two deployment modes:
- Single Server
- Flexible Server
private-5g-core Gather Diagnostics https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/private-5g-core/gather-diagnostics.md
# Gather diagnostics using the Azure portal > [!IMPORTANT]
-> Diagnostics packages may contain *personally identifiable information (PII)*. During this procedure, when providing the diagnostics package's *shared access signature (SAS)* URL to Azure support, you are explicitly giving Azure support permission to access the diagnostics package and any PII that it contains.
+> Diagnostics packages may contain information from your site which may, depending on use, include data such as personal data, customer data, and system-generated logs. During this procedure, when providing the diagnostics package's *shared access signature* (SAS) URL to Azure support, you are explicitly giving Azure support permission to access the diagnostics package and any information that it contains. You should confirm that this is acceptable under your company's privacy policies and agreements.
In this how-to guide, you'll learn how to gather a remote diagnostics package for an Azure Private 5G Core (AP5GC) site using the Azure portal. The diagnostics package can be provided, as a shared access signature (SAS) URL, to AP5GC support to assist you with issues.
+You should always collect diagnostics as soon as possible after encountering an issue and submit them with any support request. For more information, see [How to open a support request for Azure Private 5G Core](open-support-request.md).
+ ## Prerequisites You must already have an AP5GC site deployed to collect diagnostics.
You must already have an AP5GC site deployed to collect diagnostics.
1. Select **Diagnostics collection**. 1. AP5GC online service will generate a package and upload it to the provided storage account URL. Once AP5GC reports that the upload has succeeded, report the SAS URL to Azure support. 1. Generate a SAS URL by selecting **Generate SAS** on the blob details blade.
- 1. Copy the contents of the **Blob SAS URL** field and share the URL with your support representative via a [support request ticket](/azure/azure-portal/supportability/how-to-create-azure-support-request).
- > [!IMPORTANT]
- > You must always set **Service type** to **Azure Private 5G Core** when raising a support request for any issues related to AP5GC.
+ 1. Copy the contents of the **Blob SAS URL** field and share the URL with your support representative. See [How to open a support request for Azure Private 5G Core](open-support-request.md).
1. Azure support will access the diagnostics using the provided SAS URL and provide support based on the information. ## Troubleshooting
You must already have an AP5GC site deployed to collect diagnostics.
- If an invalid container URL was passed, the request will be rejected and report **400 Bad Request**. Repeat the process with the correct container URL. - If the asynchronous part of the operation fails, the asynchronous operation resource is set to **Failed** and reports a failure reason. - Additionally, check that the same user-assigned identity was added to both the site and storage account.-- If this does not resolve the issue, share the correlation ID of the failed request with AP5GC support for investigation via a [support request ticket](/azure/azure-portal/supportability/how-to-create-azure-support-request).
- > [!IMPORTANT]
- > You must always set **Service type** to **Azure Private 5G Core** when raising a support request for any issues related to AP5GC.
+- If this does not resolve the issue, share the correlation ID of the failed request with AP5GC support for investigation. See [How to open a support request for Azure Private 5G Core](open-support-request.md).
## Next steps
private-5g-core Open Support Request https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/private-5g-core/open-support-request.md
+
+ Title: How to open a support request
+
+description: This article guides you through how to submit a support request if you have a problem with your AP5GC service.
++++ Last updated : 03/31/2023+++
+# Get support for your Azure Private 5G Core service
+
+If you need help or notice problems with Azure Private 5G Core (AP5GC), you can raise a support request (also known as a support ticket). This article describes how to raise support requests for Azure Private 5G Core.
+
+> [!IMPORTANT]
+> You must always set **Service type** to **Azure Private 5G Core** when raising a support request for any issues related to AP5GC, even if the issue involves another Azure service. Selecting the wrong service type will cause your request to be delayed.
+
+Azure provides unlimited support for subscription management, which includes billing, quota adjustments, and account transfers. For technical support, you need a support plan, such as Microsoft Unified Support or Premier Support.
+
+For general information on raising support requests, see [Create an Azure support request](/azure/azure-portal/supportability/how-to-create-azure-support-request).
+
+## Prerequisites
+
+You must have an **Owner**, **Contributor**, or **Support Request Contributor** role in your Azure Private 5G Core subscription, or a custom role with [Microsoft.Support/*](../role-based-access-control/resource-provider-operations.md#microsoftsupport) at the subscription level.
+
+## 1. Generate a support request in the Azure portal
+
+1. Sign in to the [Azure portal](https://ms.portal.azure.com/).
+1. Select the question mark icon in the top menu bar.
+1. Select the **Help + support** button.
+1. Select **Create a support request**.
+
+## 2. Enter a description of the problem or the change
+
+1. Concisely describe your problem or the change you need in the **Summary** box.
+1. Select an **Issue type** from the drop-down menu.
+1. Select your **Subscription** from the drop-down menu. Choose the subscription where you're noticing the problem or need a change. The support engineer assigned to your case will only be able to access resources in the subscription you specify. If the issue applies to multiple subscriptions, you can mention other subscriptions in your description, or by sending a message later. However, the support engineer will only be able to work on subscriptions to which you have access.
+
+ > [!NOTE]
+ > The remaining steps will vary depending on which options you select. For example, you won't be prompted to select a resource for a billing enquiry.
+
+1. A new **Service** option will appear giving you the option to select either **My services** or **All services**. Select **My services**.
+1. In **Service type** select **Azure Private 5G Core** from the drop-down menu.
+1. A new **Resource** option will appear. Select the resource you need help with, or select **General question**.
+1. A new **Problem type** option will appear. Select the problem type that most accurately describes your issue from the drop-down menu.
+1. A new **Problem subtype** option will appear. Select the problem subtype that most accurately describes your issue from the drop-down menu.
+1. Select **Next**.
+
+## 3. Assess the recommended solutions
+
+Based on the information you provided, we might show you recommended solutions you can use to try to resolve the problem. In some cases, we might even run a quick diagnostic. Solutions are written by Azure engineers and will solve most common problems.
+
+If you're still unable to resolve the issue, continue creating your support request by selecting **Return to support request** and then **Next**.
+
+## 4. Enter additional details
+
+In this section, we collect more details about the problem or the change and how to contact you. Providing thorough and detailed information in this step helps us route your support request to the right engineer.
+
+You should always collect diagnostics as soon as possible after encountering an issue and submit them with your support request using the **File upload** option. See [Gather diagnostics using the Azure portal](/azure/private-5g-core/gather-diagnostics).
+
+## 5. Review and create your support request
+
+Before creating your request, review the details and diagnostics that you'll send to support. If you want to change your request or the files you've uploaded, select **Previous** to return to any tab. When you're happy with your request, select **Create**.
+
+## Next steps
+
+Learn how to [Manage an Azure support request](../azure-portal/supportability/how-to-manage-azure-support-request.md).
private-5g-core Security https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/private-5g-core/security.md
For more information on how to generate a Key Vault certificate, see [Certificat
## Personally identifiable information
-[Diagnostics packages](gather-diagnostics.md) may contain *personally identifiable information (PII)*. When providing diagnostics package to Azure support, you are explicitly giving Azure support permission to access the diagnostics package and any PII that it contains.
+[Diagnostics packages](gather-diagnostics.md) may contain information from your site which may, depending on use, include data such as personal data, customer data, and system-generated logs. When providing the diagnostics package's *shared access signature* (SAS) URL to Azure support, you are explicitly giving Azure support permission to access the diagnostics package and any information that it contains. You should confirm that this is acceptable under your company's privacy policies and agreements.
### Access authentication
private-link Availability https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/private-link/availability.md
description: In this article, learn about which Azure services support Private L
-+ Last updated 10/28/2022
private-link Tutorial Private Endpoint Webapp Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/private-link/tutorial-private-endpoint-webapp-portal.md
Last updated 06/22/2022-+ # Tutorial: Connect to a web app using an Azure Private Endpoint
purview Asset Insights https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/purview/asset-insights.md
For more information to create and complete a scan, see [the manage data sources
In Microsoft Purview Data Estate Insights, you can get an overview of the assets that have been scanned into the Data Map and view key gaps that can be closed by governance stakeholders, for better governance of the data estate.
-1. Navigate to your Microsoft Purview account in the Azure portal.
+1. Open the Microsoft Purview governance portal by:
-1. On the **Overview** page, in the **Get Started** section, select the **Open Microsoft Purview governance portal** tile.
+ - Browsing directly to [https://web.purview.azure.com](https://web.purview.azure.com) and selecting your Microsoft Purview account.
+ - Opening the [Azure portal](https://portal.azure.com), searching for and selecting the Microsoft Purview account, and then selecting the [**Open Microsoft Purview governance portal**](https://web.purview.azure.com/) button.
:::image type="content" source="./media/asset-insights/portal-access.png" alt-text="Screenshot of Microsoft Purview account in Azure portal with the Microsoft Purview governance portal button highlighted.":::
purview Catalog Firewall https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/purview/catalog-firewall.md
To configure Microsoft Purview firewall follow these steps:
- Public network access is set to _All networks_ on your Microsoft Purview account's Managed Event Hubs, if it's used. > [!NOTE]
- > Even though the network access is enabled through public internet, to gain access to Microsoft Purview governance portal, users must be first authenticated and authorized.
+ > Even though network access is enabled through the public internet, to gain access to the Microsoft Purview governance portal, [users must first be authenticated and authorized](catalog-permissions.md).
- **Disabled for ingestion only (Preview)**
purview Catalog Lineage User Guide https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/purview/catalog-lineage-user-guide.md
Lineage in Microsoft Purview includes datasets and processes. Datasets are also
To access lineage information for an asset in Microsoft Purview, follow the steps:
-1. In the Azure portal, go to the [Microsoft Purview accounts page](https://aka.ms/purviewportal).
+1. Open the Microsoft Purview governance portal by:
-1. Select your Microsoft Purview account from the list, and then select **Open Microsoft Purview governance portal** from the **Overview** page.
+ - Browsing directly to [https://web.purview.azure.com](https://web.purview.azure.com) and selecting your Microsoft Purview account.
+ - Opening the [Azure portal](https://portal.azure.com), searching for and selecting the Microsoft Purview account, and then selecting the [**Open Microsoft Purview governance portal**](https://web.purview.azure.com/) button.
1. On the Microsoft Purview governance portal **Home** page, search for a dataset name or the process name such as ADF Copy or Data Flow activity. And then press Enter.
purview Catalog Managed Vnet https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/purview/catalog-managed-vnet.md
Before deploying a Managed VNet and Managed VNet Runtime for a Microsoft Purview
> [!NOTE] > The following guide shows how to register and scan an Azure Data Lake Storage Gen 2 using Managed VNet Runtime.
-1. Go to the [Azure portal](https://portal.azure.com), and navigate to the **Microsoft Purview accounts** page and select your _Purview account_.
+1. Open the Microsoft Purview governance portal by:
+
+ - Browsing directly to [https://web.purview.azure.com](https://web.purview.azure.com) and selecting your Microsoft Purview account.
+ - Opening the [Azure portal](https://portal.azure.com), searching for and selecting the Microsoft Purview account, and then selecting the [**Open Microsoft Purview governance portal**](https://web.purview.azure.com/) button.
:::image type="content" source="media/catalog-managed-vnet/purview-managed-azure-portal.png" alt-text="Screenshot that shows the Microsoft Purview account":::
-2. **Open Microsoft Purview governance portal** and navigate to the **Data Map --> Integration runtimes**.
+2. Navigate to the **Data Map --> Integration runtimes**.
:::image type="content" source="media/catalog-managed-vnet/purview-managed-vnet.png" alt-text="Screenshot that shows Microsoft Purview Data Map menus":::
purview Catalog Permissions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/purview/catalog-permissions.md
The Microsoft Purview governance portal uses **Collections** in the Microsoft Pu
> [!IMPORTANT] > This article refers to permissions required for the Microsoft Purview governance portal, and applications like the Microsoft Purview Data Map, Data Catalog, Data Estate Insights, etc. If you are looking for permissions information for the Microsoft Purview compliance center, follow [the article for permissions in the Microsoft Purview compliance portal](/microsoft-365/compliance/microsoft-365-compliance-center-permissions).
+## Permissions to access the Microsoft Purview governance portal
+
+There are two main ways to access the Microsoft Purview governance portal, and you'll need specific permissions for either:
+
+- To access your Microsoft Purview governance portal directly at [https://web.purview.azure.com](https://web.purview.azure.com), you'll need at least a [reader role](#roles) on a collection in your Microsoft Purview Data Map.
+- To access your Microsoft Purview governance portal through the [Azure portal](https://portal.azure.com) by searching for your Microsoft Purview account, opening it, and selecting **Open Microsoft Purview governance portal**, you'll need at least a **Reader** role under **Access Control (IAM)**.
+
+> [!NOTE]
+> If you created your account using a service principal, to be able to access the Microsoft Purview governance portal you will need to [grant a user collection admin permissions on the root collection](#administrator-change).
+ ## Collections A collection is a tool that the Microsoft Purview Data Map uses to group assets, sources, and other artifacts into a hierarchy for discoverability and to manage access control. All accesses to the Microsoft Purview governance portal's resources are managed from collections in the Microsoft Purview Data Map.
You can assign roles to users, security groups, and service principals from your
After creating a Microsoft Purview (formerly Azure Purview) account, the first thing to do is create collections and assign users to roles within those collections. > [!NOTE]
-> If you created your account using a service principal, to be able to access the Microsoft Purview governance portal and assign permissions to users, you will need to grant a user collection admin permissions on the root collection.
-> You can use [this Azure CLI command](/cli/azure/purview/account#az-purview-account-add-root-collection-admin):
->
-> ```azurecli
-> az purview account add-root-collection-admin --account-name [Microsoft Purview Account Name] --resource-group [Resource Group Name] --object-id [User Object Id]
-> ```
->
-> The object-id is optional. For more information and an example, see the [CLI command reference page](/cli/azure/purview/account#az-purview-account-add-root-collection-admin).
+> If you created your account using a service principal, to be able to access the Microsoft Purview governance portal and assign permissions to users, you will need to [grant a user collection admin permissions on the root collection](#administrator-change).
### Create collections
For full instructions, see our [how-to guide for adding role assignments](how-to
## Administrator change
-There may be a time when your [root collection admin](#roles) needs to change. By default, the user who creates the account is automatically assigned collection admin to the root collection. To update the root collection admin, there are four options:
+There may be a time when your [root collection admin](#roles) needs to change, or an admin needs to be added after an account is created by an application. By default, the user who creates the account is automatically assigned collection admin to the root collection. To update the root collection admin, there are four options:
- You can manage root collection administrators in the [Azure portal](https://portal.azure.com/): 1. Sign in to the Azure portal and search for your Microsoft Purview account.
purview Concept Metamodel https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/purview/concept-metamodel.md
-+ Last updated 11/10/2022-+ # Microsoft Purview metamodel
purview Concept Workflow https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/purview/concept-workflow.md
-+ Last updated 10/17/2022
purview Create Microsoft Purview Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/purview/create-microsoft-purview-portal.md
For more information about the governance capabilities of Microsoft Purview, for
After your account is created, you'll use the Microsoft Purview governance portal to access and manage it. There are two ways to open the Microsoft Purview governance portal:
-* Open your Microsoft Purview account in the [Azure portal](https://portal.azure.com). Select the "Open Microsoft Purview governance portal" tile on the overview page.
+- You can browse directly to [https://web.purview.azure.com](https://web.purview.azure.com), select your Microsoft Purview account name, and sign in to your workspace.
+- Alternatively, open your Microsoft Purview account in the [Azure portal](https://portal.azure.com). Select the **Open Microsoft Purview governance portal** tile on the overview page.
:::image type="content" source="media/create-catalog-portal/open-purview-studio.png" alt-text="Screenshot showing the Microsoft Purview account overview page, with the Microsoft Purview governance portal tile highlighted.":::
-* Alternatively, you can browse to [https://web.purview.azure.com](https://web.purview.azure.com), select your Microsoft Purview account name, and sign in to your workspace.
- ## Next steps In this quickstart, you learned how to create a Microsoft Purview (formerly Azure Purview) account, and how to access it.
purview How To Create And Manage Collections https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/purview/how-to-create-and-manage-collections.md
Collections in the Microsoft Purview Data Map can be used to organize assets and
### Check permissions
-In order to create and manage collections in the Microsoft Purview Data Map, you'll need to be a **Collection Admin** within the Microsoft Purview governance portal. We can check these permissions in the [Microsoft Purview governance portal](https://web.purview.azure.com/resource/). You can find Studio in the overview page of the account in the [Azure portal](https://portal.azure.com).
+In order to create and manage collections in the Microsoft Purview Data Map, you'll need to be a **Collection Admin** within the Microsoft Purview governance portal. We can check these permissions in the [Microsoft Purview governance portal](https://web.purview.azure.com/resource/). You can find the Microsoft Purview governance portal by:
+
+- Browsing directly to [https://web.purview.azure.com](https://web.purview.azure.com) and selecting your Microsoft Purview account.
+- Opening the [Azure portal](https://portal.azure.com), searching for and selecting the Microsoft Purview account, and then selecting the [**Open Microsoft Purview governance portal**](https://web.purview.azure.com/) button.
1. Select Data Map > Collections from the left pane to open collection management page.
purview How To Metamodel https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/purview/how-to-metamodel.md
-+ Last updated 01/26/2023-+ # Manage assets with metamodel
purview How To Monitor Data Map Population https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/purview/how-to-monitor-data-map-population.md
In Microsoft Purview, you can scan various types of data sources and view the sc
## Monitor scan runs
-1. Go to your Microsoft Purview account -> open **Microsoft Purview governance portal** -> **Data map** -> **Monitoring**. You need to have **Data source admin** role on any collection to access this page. And you'll see the scan runs that belong to the collections on which you have data source admin privilege.
+1. Open the Microsoft Purview governance portal by:
+
+ - Browsing directly to [https://web.purview.azure.com](https://web.purview.azure.com) and selecting your Microsoft Purview account.
+ - Opening the [Azure portal](https://portal.azure.com), searching for and selecting the Microsoft Purview account, and then selecting the [**Open Microsoft Purview governance portal**](https://web.purview.azure.com/) button.
+
+1. Open your Microsoft Purview account and select **Data map** -> **Monitoring**. You need to have the **Data source admin** role on any collection to access this page. You'll see the scan runs that belong to the collections on which you have data source admin privilege.
1. The high-level KPIs show total scan runs within a period. The time period is defaulted at last 30 days, you can also choose to select last seven days. Based on the time filter selected, you can see the distribution of successful, failed, canceled, and in progress scan runs by week or by the day in the graph.
purview How To Receive Share https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/purview/how-to-receive-share.md
This registration is only needed the first time when sharing or receiving data i
## Receive share
-1. You can view your share invitations in any Microsoft Purview account. In the [Azure portal](https://portal.azure.com), search for and select the Microsoft Purview account you want to use to receive the share. Open [the Microsoft Purview governance portal](https://web.purview.azure.com/). Select the **Data Map** icon from the left navigation. Then select **Share invites**. If you received an email invitation, you can also select the **View share invite** link in the email to select a Microsoft Purview account.
+1. You can view your share invitations in any Microsoft Purview account. Open the Microsoft Purview governance portal by:
+
+ * Browsing directly to [https://web.purview.azure.com](https://web.purview.azure.com) and selecting your Microsoft Purview account.
+ * Opening the [Azure portal](https://portal.azure.com), searching for and selecting the Microsoft Purview account, and then selecting the [**Open Microsoft Purview governance portal**](https://web.purview.azure.com/) button.
+
+1. Select the **Data Map** icon from the left navigation. Then select **Share invites**. If you received an email invitation, you can also select the **View share invite** link in the email to select a Microsoft Purview account.
If you're a guest user of a tenant, you'll be asked to verify your email address for the tenant before viewing share invitation for the first time. [You can see our guide below for steps.](#guest-user-verification) Once verified, it's valid for 12 months.
purview How To Request Access https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/purview/how-to-request-access.md
-+ Last updated 03/23/2023-+ # How to request access for a data asset
purview How To Use Workflow Connectors https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/purview/how-to-use-workflow-connectors.md
-+ Last updated 02/22/2023-+ # Workflow connectors
purview How To Use Workflow Dynamic Content https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/purview/how-to-use-workflow-dynamic-content.md
-+ Last updated 03/09/2023-+ # Workflow dynamic content
purview How To Use Workflow Http Connector https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/purview/how-to-use-workflow-http-connector.md
-+ Last updated 09/30/2022-+ # Workflows HTTP connector
purview How To View Self Service Data Access Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/purview/how-to-view-self-service-data-access-policy.md
If you need to add or request permissions, follow the [Microsoft Purview permiss
## Steps to view self-service data access policies
-1. Open the Azure portal and launch the [Microsoft Purview governance portal](https://web.purview.azure.com/resource/). The Microsoft Purview governance portal can be launched as shown below or by using the [url directly](https://web.purview.azure.com/resource/).
+1. Launch the [Microsoft Purview governance portal](https://web.purview.azure.com/resource/). You can launch the governance portal as shown below, or by using the [URL directly](https://web.purview.azure.com/resource/).
:::image type="content" source="./media/how-to-view-self-service-data-access-policy/Purview-Studio-launch-pic-1.png" alt-text="Screenshot showing a Microsoft Purview account open in the Azure portal, with the Microsoft Purview governance portal button highlighted.":::
purview How To Workflow Asset Curation https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/purview/how-to-workflow-asset-curation.md
-+ Last updated 01/03/2023-+
purview How To Workflow Business Terms Approval https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/purview/how-to-workflow-business-terms-approval.md
-+ Last updated 02/20/2023-+
purview How To Workflow Manage Requests Approvals https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/purview/how-to-workflow-manage-requests-approvals.md
-+ Last updated 02/20/2023-+ # Manage workflow requests and approvals
purview How To Workflow Manage Runs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/purview/how-to-workflow-manage-runs.md
-+ Last updated 03/23/2023-+ # Manage workflow runs
purview How To Workflow Self Service Data Access Hybrid https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/purview/how-to-workflow-self-service-data-access-hybrid.md
-+ Last updated 02/20/2023-+ # Self-service access workflows for hybrid data estates
purview Manage Credentials https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/purview/manage-credentials.md
Follow these steps only if permission model in your Azure Key Vault resource is
Before you can create a Credential, first associate one or more of your existing Azure Key Vault instances with your Microsoft Purview account.
-1. From the [Azure portal](https://portal.azure.com), select your Microsoft Purview account and open the [Microsoft Purview governance portal](https://web.purview.azure.com/resource/). Navigate to the **Management Center** in the studio and then navigate to **credentials**.
+1. Open your Microsoft Purview governance portal by:
-2. From the **Credentials** page, select **Manage Key Vault connections**.
+ - Browsing directly to [https://web.purview.azure.com](https://web.purview.azure.com) and selecting your Microsoft Purview account.
+ - Opening the [Azure portal](https://portal.azure.com), searching for and selecting the Microsoft Purview account, and then selecting the [**Microsoft Purview governance portal**](https://web.purview.azure.com/) button.
+
+1. Navigate to the **Management Center** in the governance portal, and then navigate to **Credentials**.
+
+1. From the **Credentials** page, select **Manage Key Vault connections**.
:::image type="content" source="media/manage-credentials/manage-kv-connections.png" alt-text="Manage Azure Key Vault connections":::
-3. Select **+ New** from the Manage Key Vault connections page.
+1. Select **+ New** from the Manage Key Vault connections page.
-4. Provide the required information, then select **Create**.
+1. Provide the required information, then select **Create**.
-5. Confirm that your Key Vault has been successfully associated with your Microsoft Purview account as shown in this example:
+1. Confirm that your Key Vault has been successfully associated with your Microsoft Purview account as shown in this example:
:::image type="content" source="media/manage-credentials/view-kv-connections.png" alt-text="View Azure Key Vault connections to confirm.":::
purview Quickstart Create Collection https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/purview/quickstart-create-collection.md
Collections are the Microsoft Purview Data Map's tool to manage ownership and ac
## Check permissions
-In order to create and manage collections in the Microsoft Purview Data Map, you'll need to be a **Collection Admin** within the Microsoft Purview governance portal. We can check these permissions in the [portal](use-azure-purview-studio.md). You can find the studio by going to your Microsoft Purview account in the [Azure portal](https://portal.azure.com), and selecting the **Open Microsoft Purview governance portal** tile on the overview page.
+In order to create and manage collections in the Microsoft Purview Data Map, you'll need to be a **Collection Admin** within the Microsoft Purview governance portal. You can check these permissions in the [governance portal](use-azure-purview-studio.md). You can find the governance portal by:
+
+* Browsing directly to [https://web.purview.azure.com](https://web.purview.azure.com) and selecting your Microsoft Purview account.
+* Opening the [Azure portal](https://portal.azure.com), searching for and selecting the Microsoft Purview account, and then selecting the [**Microsoft Purview governance portal**](https://web.purview.azure.com/) button.
1. Select Data Map > Collections from the left pane to open collection management page.
In order to create and manage collections in the Microsoft Purview Data Map, you
## Create a collection in the portal
-To create your collection, we'll start in the [Microsoft Purview governance portal](use-azure-purview-studio.md). You can find the portal by going to your Microsoft Purview account in the [Azure portal](https://portal.azure.com) and selecting the **Open Microsoft Purview governance portal** tile on the overview page.
+To create your collection, we'll start in the [Microsoft Purview governance portal](use-azure-purview-studio.md). You can find the portal by:
+
+* Browsing directly to [https://web.purview.azure.com](https://web.purview.azure.com) and selecting your Microsoft Purview account.
+* Opening the [Azure portal](https://portal.azure.com), searching for and selecting the Microsoft Purview account, and then selecting the [**Microsoft Purview governance portal**](https://web.purview.azure.com/) button.
1. Select Data Map > Collections from the left pane to open collection management page.
purview Quickstart Data Share https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/purview/quickstart-data-share.md
You've now created your share. The recipients of your share will receive an invi
## Receive share
-1. You can view your share invitations in any Microsoft Purview account. In the [Azure portal](https://portal.azure.com), search for and select the Microsoft Purview account you want to use to receive the share. Open [the Microsoft Purview governance portal](https://web.purview.azure.com/). Select the **Data Map** icon from the left navigation. Then select **Share invites**. If you received an email invitation, you can also select the **View share invite** link in the email to select a Microsoft Purview account.
+1. You can view your share invitations in any Microsoft Purview account. Open the Microsoft Purview governance portal by:
+
+ - Browsing directly to [https://web.purview.azure.com](https://web.purview.azure.com) and selecting your Microsoft Purview account.
+ - Opening the [Azure portal](https://portal.azure.com), searching for and selecting the Microsoft Purview account, and then selecting the [**Microsoft Purview governance portal**](https://web.purview.azure.com/) button.
+
+1. Select the **Data Map** icon from the left navigation. Then select **Share invites**. If you received an email invitation, you can also select the **View share invite** link in the email to select a Microsoft Purview account.
If you're a guest user of a tenant, you'll be asked to verify your email address for the tenant before viewing pending received share for the first time. [You can see our guide for steps.](how-to-receive-share.md#guest-user-verification) Once verified, it's valid for 12 months.
purview Register Scan Adls Gen1 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/purview/register-scan-adls-gen1.md
This section will enable you to register the ADLS Gen1 data source and set up an
It is important to register the data source in Microsoft Purview prior to setting up a scan for the data source.
-1. Go to the [Azure portal](https://portal.azure.com), and navigate to the **Microsoft Purview accounts** page and select your _Purview account_
+1. Open the Microsoft Purview governance portal by:
-1. **Open Microsoft Purview governance portal** and navigate to the **Data Map --> Sources**
+ - Browsing directly to [https://web.purview.azure.com](https://web.purview.azure.com) and selecting your Microsoft Purview account.
+ - Opening the [Azure portal](https://portal.azure.com), searching for and selecting the Microsoft Purview account, and then selecting the [**Microsoft Purview governance portal**](https://web.purview.azure.com/) button.
+
+1. Navigate to the **Data Map --> Sources**
:::image type="content" source="media/register-scan-adls-gen1/register-adls-gen1-open-purview-studio.png" alt-text="Screenshot that shows the link to open Microsoft Purview governance portal":::
purview Register Scan Adls Gen2 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/purview/register-scan-adls-gen2.md
This section will enable you to register the ADLS Gen2 data source for scan and
It's important to register the data source in Microsoft Purview prior to setting up a scan for the data source.
-1. Go to the [Azure portal](https://portal.azure.com), and navigate to the **Microsoft Purview accounts** page and select your _Purview account_
+1. Go to the Microsoft Purview governance portal by:
-1. **Open Microsoft Purview governance portal** and navigate to the **Data Map --> Sources**
+ - Browsing directly to [https://web.purview.azure.com](https://web.purview.azure.com) and selecting your Microsoft Purview account.
+ - Opening the [Azure portal](https://portal.azure.com), searching for and selecting the Microsoft Purview account, and then selecting the [**Microsoft Purview governance portal**](https://web.purview.azure.com/) button.
+
+1. Navigate to the **Data Map --> Sources**
:::image type="content" source="media/register-scan-adls-gen2/register-adls-gen2-open-purview-studio.png" alt-text="Screenshot that shows the link to open Microsoft Purview governance portal":::
purview Register Scan Azure Blob Storage Source https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/purview/register-scan-azure-blob-storage-source.md
This section will enable you to register the Azure Blob storage account for scan
It is important to register the data source in Microsoft Purview prior to setting up a scan for the data source.
-1. Go to the [Azure portal](https://portal.azure.com), and navigate to the **Microsoft Purview accounts** page and select your _Purview account_
+1. Go to the Microsoft Purview governance portal by:
-1. **Open Microsoft Purview governance portal** and navigate to the **Data Map --> Sources**
+ * Browsing directly to [https://web.purview.azure.com](https://web.purview.azure.com) and selecting your Microsoft Purview account.
+ * Opening the [Azure portal](https://portal.azure.com), searching for and selecting the Microsoft Purview account, and then selecting the [**Microsoft Purview governance portal**](https://web.purview.azure.com/) button.
+
+1. Navigate to the **Data Map --> Sources**
:::image type="content" source="media/register-scan-azure-blob-storage-source/register-blob-open-purview-studio.png" alt-text="Screenshot that shows the link to open Microsoft Purview governance portal":::
purview Register Scan Azure Cosmos Database https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/purview/register-scan-azure-cosmos-database.md
This section will enable you to register the Azure Cosmos DB for NoSQL instance
It is important to register the data source in Microsoft Purview prior to setting up a scan for the data source.
-1. Go to the [Azure portal](https://portal.azure.com), and navigate to the **Microsoft Purview accounts** page and select your _Purview account_
+1. Open the Microsoft Purview governance portal by:
-1. **Open Microsoft Purview governance portal** and navigate to the **Data Map --> Collections**
+ * Browsing directly to [https://web.purview.azure.com](https://web.purview.azure.com) and selecting your Microsoft Purview account.
+ * Opening the [Azure portal](https://portal.azure.com), searching for and selecting the Microsoft Purview account, and then selecting the [**Microsoft Purview governance portal**](https://web.purview.azure.com/) button.
+
+1. Navigate to the **Data Map --> Collections**
:::image type="content" source="media/register-scan-azure-cosmos-database/register-cosmos-db-open-purview-studio.png" alt-text="Screenshot that navigates to the Sources link in the Data Map":::
purview Register Scan Azure Data Explorer https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/purview/register-scan-azure-data-explorer.md
To register using either of these managed identities, follow these steps:
To register a new Azure Data Explorer (Kusto) account in your data catalog, follow these steps:
-1. Navigate to your Microsoft Purview account
+1. Open the Microsoft Purview governance portal by:
+
+ - Browsing directly to [https://web.purview.azure.com](https://web.purview.azure.com) and selecting your Microsoft Purview account.
+ - Opening the [Azure portal](https://portal.azure.com), searching for and selecting the Microsoft Purview account, and then selecting the [**Microsoft Purview governance portal**](https://web.purview.azure.com/) button.
1. Select **Data Map** on the left navigation. 1. Select **Register** 1. On **Register sources**, select **Azure Data Explorer**
purview Register Scan Azure Files Storage Source https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/purview/register-scan-azure-files-storage-source.md
When authentication method selected is **Account Key**, you need to get your acc
To register a new Azure Files account in your data catalog, follow these steps:
-1. Navigate to your Microsoft Purview Data Studio.
+1. Open the Microsoft Purview governance portal by:
+
+ - Browsing directly to [https://web.purview.azure.com](https://web.purview.azure.com) and selecting your Microsoft Purview account.
+ - Opening the [Azure portal](https://portal.azure.com), searching for and selecting the Microsoft Purview account, and then selecting the [**Microsoft Purview governance portal**](https://web.purview.azure.com/) button.
1. Select **Data Map** on the left navigation. 1. Select **Register** 1. On **Register sources**, select **Azure Files**
purview Register Scan Azure Multiple Sources https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/purview/register-scan-azure-multiple-sources.md
To learn how to add permissions on each resource type within a subscription or r
### Steps to register
-1. Go to your Microsoft Purview account.
+1. Open the Microsoft Purview governance portal by:
+
+ - Browsing directly to [https://web.purview.azure.com](https://web.purview.azure.com) and selecting your Microsoft Purview account.
+ - Opening the [Azure portal](https://portal.azure.com), searching for and selecting the Microsoft Purview account, and then selecting the [**Microsoft Purview governance portal**](https://web.purview.azure.com/) button.
1. Select **Data Map** on the left menu. 1. Select **Register**. 1. On **Register sources**, select **Azure (multiple)**.
purview Register Scan Azure Mysql Database https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/purview/register-scan-azure-mysql-database.md
Follow the instructions in [CREATE DATABASES AND USERS](../mysql/howto-create-us
To register a new Azure Database for MySQL in your data catalog, do the following:
-1. Navigate to your Microsoft Purview account.
+1. Open the Microsoft Purview governance portal by:
+ - Browsing directly to [https://web.purview.azure.com](https://web.purview.azure.com) and selecting your Microsoft Purview account.
+ - Opening the [Azure portal](https://portal.azure.com), searching for and selecting the Microsoft Purview account, and then selecting the [**Microsoft Purview governance portal**](https://web.purview.azure.com/) button.
1. Select **Data Map** on the left navigation. 1. Select **Register**.
purview Register Scan Azure Postgresql https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/purview/register-scan-azure-postgresql.md
Connecting to an Azure Database for PostgreSQL database requires the fully quali
To register a new Azure Database for PostgreSQL in your data catalog, do the following:
-1. Navigate to your Microsoft Purview account.
+1. Open the Microsoft Purview governance portal by:
+ - Browsing directly to [https://web.purview.azure.com](https://web.purview.azure.com) and selecting your Microsoft Purview account.
+ - Opening the [Azure portal](https://portal.azure.com), searching for and selecting the Microsoft Purview account, and then selecting the [**Microsoft Purview governance portal**](https://web.purview.azure.com/) button.
1. Select **Data Map** on the left navigation. 1. Select **Register**
purview Register Scan Azure Sql Database https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/purview/register-scan-azure-sql-database.md
When you're setting up a scan, you can further scope it after providing the data
Before you scan, it's important to register the data source in Microsoft Purview:
-1. In the [Azure portal](https://portal.azure.com), go to the **Microsoft Purview accounts** page and select your Microsoft Purview account.
+1. Open the Microsoft Purview governance portal by:
-1. Under **Open Microsoft Purview Governance Portal**, select **Open**, and then select **Data Map**.
+ - Browsing directly to [https://web.purview.azure.com](https://web.purview.azure.com) and selecting your Microsoft Purview account.
+ - Opening the [Azure portal](https://portal.azure.com), searching for and selecting the Microsoft Purview account, and then selecting the [**Microsoft Purview governance portal**](https://web.purview.azure.com/) button.
+
+1. Navigate to the **Data Map**.
:::image type="content" source="media/register-scan-azure-sql-database/register-scan-azure-sql-db-open-purview-studio.png" alt-text="Screenshot that shows the area for opening a Microsoft Purview governance portal.":::
purview Register Scan Azure Sql Managed Instance https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/purview/register-scan-azure-sql-managed-instance.md
You can follow the instructions in [CREATE LOGIN](/sql/t-sql/statements/create-l
### Steps to register
-1. Navigate to your [Microsoft Purview governance portal](https://web.purview.azure.com/resource/)
+1. Open the Microsoft Purview governance portal by:
-1. Select **Data Map** on the left navigation.
+ - Browsing directly to [https://web.purview.azure.com](https://web.purview.azure.com) and selecting your Microsoft Purview account.
+ - Opening the [Azure portal](https://portal.azure.com), searching for and selecting the Microsoft Purview account, and then selecting the [**Microsoft Purview governance portal**](https://web.purview.azure.com/) button.
+
+1. Navigate to the **Data Map**.
1. Select **Register**
purview Register Scan Azure Synapse Analytics https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/purview/register-scan-azure-synapse-analytics.md
When authentication method selected is **SQL Authentication**, you need to get y
To register a new SQL dedicated pool in Microsoft Purview, complete the following steps:
-1. Navigate to your Microsoft Purview account.
+1. Open the Microsoft Purview governance portal by:
+
+ - Browsing directly to [https://web.purview.azure.com](https://web.purview.azure.com) and selecting your Microsoft Purview account.
+ - Opening the [Azure portal](https://portal.azure.com), searching for and selecting the Microsoft Purview account, and then selecting the [**Microsoft Purview governance portal**](https://web.purview.azure.com/) button.
+ 1. Select **Data Map** on the left navigation. 1. Select **Register** 1. On **Register sources**, select **Azure Dedicated SQL Pool (formerly SQL DW)**.
purview Register Scan Cassandra Source https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/purview/register-scan-cassandra-source.md
This section describes how to register Cassandra in Microsoft Purview using the
To register a new Cassandra server in your data catalog:
-1. Go to your Microsoft Purview account.
+1. Open the Microsoft Purview governance portal by:
+
+ - Browsing directly to [https://web.purview.azure.com](https://web.purview.azure.com) and selecting your Microsoft Purview account.
+ - Opening the [Azure portal](https://portal.azure.com), searching for and selecting the Microsoft Purview account, and then selecting the [**Microsoft Purview governance portal**](https://web.purview.azure.com/) button.
1. Select **Data Map** on the left pane. 1. Select **Register**. 1. On the **Register sources** screen, select **Cassandra**, and then select **Continue**:
purview Register Scan Db2 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/purview/register-scan-db2.md
-+ Last updated 10/21/2022-+ # Connect to and manage Db2 in Microsoft Purview
purview Register Scan Google Bigquery Source https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/purview/register-scan-google-bigquery-source.md
This section describes how to register a Google BigQuery project in Microsoft Pu
### Steps to register
-1. Navigate to your Microsoft Purview account.
+1. Open the Microsoft Purview governance portal by:
+
+ - Browsing directly to [https://web.purview.azure.com](https://web.purview.azure.com) and selecting your Microsoft Purview account.
+ - Opening the [Azure portal](https://portal.azure.com), searching for and selecting the Microsoft Purview account, and then selecting the [**Microsoft Purview governance portal**](https://web.purview.azure.com/) button.
1. Select **Data Map** on the left navigation. 1. Select **Register.** 1. On Register sources, select **Google BigQuery** . Select **Continue.**
purview Register Scan Hdfs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/purview/register-scan-hdfs.md
-+ Last updated 08/03/2022-+ # Connect to and manage HDFS in Microsoft Purview
purview Register Scan Hive Metastore Source https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/purview/register-scan-hive-metastore-source.md
This section describes how to register a Hive Metastore database in Microsoft Pu
The only supported authentication for a Hive Metastore database is Basic Authentication.
-1. Go to your Microsoft Purview account.
+1. Open the Microsoft Purview governance portal by:
+
+ - Browsing directly to [https://web.purview.azure.com](https://web.purview.azure.com) and selecting your Microsoft Purview account.
+ - Opening the [Azure portal](https://portal.azure.com), searching for and selecting the Microsoft Purview account, and then selecting the [**Microsoft Purview governance portal**](https://web.purview.azure.com/) button.
1. Select **Data Map** on the left pane.
purview Register Scan Looker Source https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/purview/register-scan-looker-source.md
An API3 key is required to connect to the Looker server. The API3 key consists i
To register a new Looker server in your data catalog, follow these steps:
-1. Navigate to your Microsoft Purview account.
+1. Open the Microsoft Purview governance portal by:
+
+ - Browsing directly to [https://web.purview.azure.com](https://web.purview.azure.com) and selecting your Microsoft Purview account.
+ - Opening the [Azure portal](https://portal.azure.com), searching for and selecting the Microsoft Purview account, and then selecting the [**Microsoft Purview governance portal**](https://web.purview.azure.com/) button.
1. Select **Data Map** on the left navigation. 1. Select **Register.** 1. On Register sources, select **Looker**. Select **Continue.**
purview Register Scan Mongodb https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/purview/register-scan-mongodb.md
-+ Last updated 10/21/2022-+ # Connect to and manage MongoDB in Microsoft Purview
purview Register Scan Mysql https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/purview/register-scan-mysql.md
-+ Last updated 10/21/2022-+ # Connect to and manage MySQL in Microsoft Purview
purview Register Scan On Premises Sql Server https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/purview/register-scan-on-premises-sql-server.md
The account must have access to the **master** database. This is because the `sy
### Steps to register
-1. Navigate to your Microsoft Purview account
+1. Open the Microsoft Purview governance portal by:
+
+ - Browsing directly to [https://web.purview.azure.com](https://web.purview.azure.com) and selecting your Microsoft Purview account.
+ - Opening the [Azure portal](https://portal.azure.com), searching for and selecting the Microsoft Purview account, and then selecting the [**Microsoft Purview governance portal**](https://web.purview.azure.com/) button.
1. Under Sources and scanning in the left navigation, select **Integration runtimes**. Make sure a self-hosted integration runtime is set up. If it is not set up, follow the steps mentioned [here](manage-integration-runtimes.md) to create a self-hosted integration runtime for scanning on an on-premises or Azure VM that has access to your on-premises network.
purview Register Scan Postgresql https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/purview/register-scan-postgresql.md
-+ Last updated 10/21/2022-+ # Connect to and manage PostgreSQL in Microsoft Purview
purview Register Scan Salesforce https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/purview/register-scan-salesforce.md
-+ Last updated 10/21/2022-+ # Connect to and manage Salesforce in Microsoft Purview
purview Register Scan Sap Bw https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/purview/register-scan-sap-bw.md
The only supported authentication for SAP BW source is **Basic authentication**.
### Steps to register
-1. Navigate to your Microsoft Purview account.
+1. Open the Microsoft Purview governance portal by:
+
+ - Browsing directly to [https://web.purview.azure.com](https://web.purview.azure.com) and selecting your Microsoft Purview account.
+ - Opening the [Azure portal](https://portal.azure.com), searching for and selecting the Microsoft Purview account, and then selecting the [**Microsoft Purview governance portal**](https://web.purview.azure.com/) button.
1. Select **Data Map** on the left navigation. 1. Select **Register**. 1. In **Register sources**, select **SAP BW** > **Continue**.
purview Register Scan Sap Hana https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/purview/register-scan-sap-hana.md
GRANT SELECT ON SCHEMA _SYS_BIC TO <user>;
This section describes how to register a SAP HANA in Microsoft Purview by using [the Microsoft Purview governance portal](https://web.purview.azure.com/).
-1. Go to your Microsoft Purview account.
-
+1. Open the Microsoft Purview governance portal by:
+ - Browsing directly to [https://web.purview.azure.com](https://web.purview.azure.com) and selecting your Microsoft Purview account.
+ - Opening the [Azure portal](https://portal.azure.com), searching for and selecting the Microsoft Purview account, and then selecting the [**Microsoft Purview governance portal**](https://web.purview.azure.com/) button.
1. Select **Data Map** on the left pane. 1. Select **Register**.
purview Register Scan Sapecc Source https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/purview/register-scan-sapecc-source.md
The only supported authentication for SAP ECC source is **Basic authentication**
### Steps to register
-1. Navigate to your Microsoft Purview account.
+1. Open the Microsoft Purview governance portal by:
+
+ - Browsing directly to [https://web.purview.azure.com](https://web.purview.azure.com) and selecting your Microsoft Purview account.
+ - Opening the [Azure portal](https://portal.azure.com), searching for and selecting the Microsoft Purview account, and then selecting the [**Microsoft Purview governance portal**](https://web.purview.azure.com/) button.
1. Select **Data Map** on the left navigation. 1. Select **Register** 1. On Register sources, select **SAP ECC**. Select **Continue.**
purview Register Scan Saps4hana Source https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/purview/register-scan-saps4hana-source.md
The only supported authentication for SAP S/4HANA source is **Basic authenticati
### Steps to register
-1. Navigate to your Microsoft Purview account.
+1. Open the Microsoft Purview governance portal by:
+
+ - Browsing directly to [https://web.purview.azure.com](https://web.purview.azure.com) and selecting your Microsoft Purview account.
+ - Opening the [Azure portal](https://portal.azure.com), searching for and selecting the Microsoft Purview account, and then selecting the [**Microsoft Purview governance portal**](https://web.purview.azure.com/) button.
1. Select **Data Map** on the left navigation. 1. Select **Register** 1. On Register sources, select **SAP S/4HANA.** Select **Continue**
purview Register Scan Snowflake https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/purview/register-scan-snowflake.md
-+ Last updated 10/21/2022-+ # Connect to and manage Snowflake in Microsoft Purview
purview Register Scan Synapse Workspace https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/purview/register-scan-synapse-workspace.md
Only a user with at least a *Reader* role on the Azure Synapse workspace and who
### Steps to register
-1. Go to your Microsoft Purview account.
+1. Open the Microsoft Purview governance portal by:
+
+ - Browsing directly to [https://web.purview.azure.com](https://web.purview.azure.com) and selecting your Microsoft Purview account.
+ - Opening the [Azure portal](https://portal.azure.com), searching for and selecting the Microsoft Purview account, and then selecting the [**Microsoft Purview governance portal**](https://web.purview.azure.com/) button.
1. On the left pane, select **Sources**. 1. Select **Register**. 1. Under **Register sources**, select **Azure Synapse Analytics (multiple)**.
purview Register Scan Teradata Source https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/purview/register-scan-teradata-source.md
This section describes how to register Teradata in Microsoft Purview using the [
### Steps to register
-1. Navigate to your Microsoft Purview account.
-1. Select **Data Map** on the left navigation.
-1. Select **Register**
-1. On Register sources, select **Teradata**. Select **Continue**
+1. Open the Microsoft Purview governance portal by:
+
+ - Browsing directly to [https://web.purview.azure.com](https://web.purview.azure.com) and selecting your Microsoft Purview account.
+ - Opening the [Azure portal](https://portal.azure.com), searching for and selecting the Microsoft Purview account, and then selecting the [**Microsoft Purview governance portal**](https://web.purview.azure.com/) button.
+1. Select **Data Map** on the left navigation.
+1. Select **Register**
+1. On Register sources, select **Teradata**. Select **Continue**
:::image type="content" source="media/register-scan-teradata-source/register-sources.png" alt-text="register Teradata options" border="true"::: On the **Register sources (Teradata)** screen, do the following:
-1. Enter a **Name** that the data source will be listed with in the Catalog.
+1. Enter a **Name** that the data source will be listed with in the Catalog.
-1. Enter the **Host** name to connect to a Teradata source. It can also be an IP address of the server.
+1. Enter the **Host** name to connect to a Teradata source. It can also be an IP address of the server.
-1. Select a collection or create a new one (Optional)
+1. Select a collection or create a new one (Optional)
-1. Finish to register the data source.
+1. Finish to register the data source.
:::image type="content" source="media/register-scan-teradata-source/register-sources-2.png" alt-text="register Teradata" border="true":::
purview Scan Data Sources https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/purview/scan-data-sources.md
In the steps below we'll be using [Azure Blob Storage](register-scan-azure-blob-
>[!IMPORTANT] > These are the general steps for creating a scan, but you should refer to [the source page](microsoft-purview-connector-overview.md) for source-specific prerequisites and scanning instructions.
+1. Open the Microsoft Purview governance portal by:
-1. In the [Azure portal](https://portal.azure.com), open your **Microsoft Purview account** and select the **Open Microsoft Purview governance portal** button.
+ - Browsing directly to [https://web.purview.azure.com](https://web.purview.azure.com) and selecting your Microsoft Purview account.
+ - Opening the [Azure portal](https://portal.azure.com), searching for and selecting the Microsoft Purview account, and then selecting the [**Microsoft Purview governance portal**](https://web.purview.azure.com/) button.
:::image type="content" source="./media/scan-data-sources/open-purview-studio.png" alt-text="Screenshot of Microsoft Purview window in Azure portal, with the Microsoft Purview governance portal button highlighted." border="true":::
purview Tutorial Register Scan On Premises Sql Server https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/purview/tutorial-register-scan-on-premises-sql-server.md
-+ Last updated 11/03/2022-+ # Tutorial: Register and scan an on-premises SQL Server
In this tutorial, you'll learn how to:
## Sign in to the Microsoft Purview governance portal
-To interact with Microsoft Purview, you'll connect to the [Microsoft Purview governance portal](https://web.purview.azure.com/resource/) through the Azure portal. You can find the studio by going to your Microsoft Purview account in the [Azure portal](https://portal.azure.com), and selecting the **Open Microsoft Purview governance portal** tile on the overview page.
+To interact with Microsoft Purview, you'll connect to the [Microsoft Purview governance portal](https://web.purview.azure.com/resource/). You can find the governance portal by:
+
+- Browsing directly to [https://web.purview.azure.com](https://web.purview.azure.com) and selecting your Microsoft Purview account.
+- Opening the [Azure portal](https://portal.azure.com), searching for and selecting the Microsoft Purview account, and then selecting the [**Microsoft Purview governance portal**](https://web.purview.azure.com/) button.
:::image type="content" source="./media/tutorial-register-scan-on-premises-sql-server/open-purview-studio.png" alt-text="Screenshot of Microsoft Purview window in Azure portal, with the Microsoft Purview governance portal button highlighted." border="true":::
If you would like to create a new login and user to be able to scan your SQL ser
## Register SQL Server
-1. Navigate to your Microsoft Purview account in the [Azure portal](https://portal.azure.com), and select the [Microsoft Purview governance portal](#sign-in-to-the-microsoft-purview-governance-portal).
+1. Open the Microsoft Purview governance portal by:
+
+ - Browsing directly to [https://web.purview.azure.com](https://web.purview.azure.com) and selecting your Microsoft Purview account.
+ - Opening the [Azure portal](https://portal.azure.com), searching for and selecting the Microsoft Purview account, and then selecting the [**Microsoft Purview governance portal**](https://web.purview.azure.com/) button.
1. Under Sources and scanning in the left navigation, select **Integration runtimes**. Make sure a self-hosted integration runtime is set up. If it's not set up, follow the steps mentioned [here](manage-integration-runtimes.md) to create a self-hosted integration runtime for scanning on an on-premises or Azure VM that has access to your on-premises network.
purview Use Microsoft Purview Governance Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/purview/use-microsoft-purview-governance-portal.md
This article gives an overview of some of the main features of Microsoft Purview
## Prerequisites
-* An Active Microsoft Purview account is already created in Azure portal and the user has permissions to access [the Microsoft Purview governance portal](https://web.purview.azure.com/resource/).
+* An active Microsoft Purview account is already created in the Azure portal
+* The user has permissions to access [the Microsoft Purview governance portal](https://web.purview.azure.com/resource/).
## Launch Microsoft Purview account
-* To launch your Microsoft Purview account, go to Microsoft Purview accounts in Azure portal, select the account you want to launch and launch the account.
+* You can launch the Microsoft Purview account directly by going to `https://web.purview.azure.com` and selecting **Azure Active Directory** and the account name, or by going to `https://web.purview.azure.com/resource/yourpurviewaccountname`.
+
+* To launch your Microsoft Purview account from the [Azure portal](https://portal.azure.com), go to **Microsoft Purview accounts**, select the account you want, and then launch it.
:::image type="content" source="./media/use-purview-studio/open-purview-studio.png" alt-text="Screenshot of Microsoft Purview window in Azure portal, with the Microsoft Purview governance portal button highlighted." border="true":::
-* Another way to launch Microsoft Purview account is to go to `https://web.purview.azure.com`, select **Azure Active Directory** and an account name to launch the account.
+>[!TIP]
+>If you can't access the portal, [confirm you have the necessary permissions](catalog-permissions.md#permissions-to-access-the-microsoft-purview-governance-portal).
## Home page
reliability Availability Zones Baseline https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/reliability/availability-zones-baseline.md
When creating reliable workloads, you can choose at least one of the following a
- **Zone-redundant**. A zone-redundant configuration provides resources that are replicated or distributed across zones automatically.
-In addition to the two availability zone options, zonal and zone-redundant, Azure offers **Global services**, meaning that they're available globally regardless of region. Because these services are always available across regions, they're resilient to both regional and zonal outages. You don't need to configure or enable these services.
+
+In addition to the two availability zone options, zonal and zone-redundant, Azure offers **Global services**, meaning that they're available globally regardless of region. Because these services are always available across regions, they're resilient to both regional and zonal outages.
To see which Azure services support availability zones, see [Availability zone service and regional support](availability-zones-service-support.md). >[!NOTE]
->When you don't select a zone configuration for your resource, whether zonal or zone-redundant, the resource and its sub-components won't be zone resilient and can go down during a zonal outage in that region.
+>When you don't select a zone configuration for your resource, either zonal or zone-redundant, the resource and its sub-components won't be zone resilient and can go down during a zonal outage in that region.
## Considerations for migrating to availability zone support
- There are many ways to create a reliable Azure application with availability zones that meet both SLAs and reliability targets. Follow the steps in this section to choose the right approach for your needs based on technical and regulatory considerations, service capabilities, data residency, compliance requirements, and latency.
+
+There are a number of possible ways to create a reliable Azure application with availability zones that meet both SLAs and reliability targets. Follow the steps below to choose the right approach for your needs based on technical and regulatory considerations, service capabilities, data residency, compliance requirements, and latency.
### Step 1: Check if the Azure region supports availability zones
-In this first step, you need to [validate](availability-zones-service-support.md) that your selected Azure region support availability zones and the required Azure services for your application.
+In this first step, you'll need to [validate](availability-zones-service-support.md) that your selected Azure region supports availability zones as well as the required Azure services for your application.
+ If your region supports availability zones, we highly recommend that you configure your workload for availability zones. If your region doesn't support availability zones, you'll need to use [Azure Resource Mover guidance](/azure/resource-mover/move-region-availability-zone) to migrate to a region that offers availability zone support.
To check for regional support of services, see [Products available by region](ht
To list the available VM SKUs by Azure region and zone, see [Check VM SKU availability](/azure/virtual-machines/windows/create-powershell-availability-zone#check-vm-sku-availability).
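The linked article shows a PowerShell approach; as a rough Azure CLI equivalent, you can filter for SKUs that support availability zones. The region and size filter below are placeholder assumptions, not values from this article:

```azurecli
# Placeholder region and size filter; lists VM SKUs in eastus2 that support availability zones
az vm list-skus --location eastus2 --zone --size Standard_D --output table
```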
-If your region doesn't support the services and SKUs that your application requires, you'll need to go back to [Step 1: Check the product availability in the Azure region](#step-1-check-if-the-azure-region-supports-availability-zones) to find a new region.
+If your region doesn't support the services and SKUs that your application requires, you'll need to go back to [Step 1: Check the product availability in the Azure region](#step-1-check-if-the-azure-region-supports-availability-zones) to find a new region that supports them. If your region does support the required services and SKUs, we highly recommend that you configure your workload with zone-redundancy.
- the services and SKUs that your application requires, we highly recommended that you configure your workload with zone-redundancy. For zonal high availability of Azure IaaS Virtual Machines, use [Virtual Machine Scale Sets Flex](/azure/virtual-machine-scale-sets/virtual-machine-scale-sets-orchestration-modes) to spread VMs across multiple availability zones.
+For zonal high availability of Azure IaaS Virtual Machines, use [Virtual Machine Scale Sets (VMSS) Flex](/azure/virtual-machine-scale-sets/virtual-machine-scale-sets-orchestration-modes) to spread VMs across multiple availability zones.
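As a minimal sketch only (the resource names, image alias, and instance count are assumptions, not values from this article), a Flexible-orchestration scale set spread across three zones might be created like this:

```azurecli
# Hypothetical names; creates a Flexible-orchestration scale set with VM instances spread across zones 1, 2, and 3
az vmss create \
  --resource-group my-rg \
  --name my-zonal-vmss \
  --orchestration-mode Flexible \
  --zones 1 2 3 \
  --instance-count 3 \
  --image Ubuntu2204 \
  --admin-username azureuser \
  --generate-ssh-keys
```

Zone-redundant services don't need this kind of per-zone placement; the platform distributes them across zones automatically.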
### Step 3: Consider your application requirements In this final step, you'll determine, based on application requirements, which kind of availability zone support is most suitable to your application.
-Below are three important questions that can help you choose the correct availability zone deployment:
+Below are three important questions that will help you choose the correct availability zone deployment:
#### Does your application include latency sensitive components?
For a [distributed microservices model](/azure/architecture/guide/architecture-s
With a zonal deployment, you must: 1. Identify latency sensitive resources or services in your architecture. 1. Confirm that the latency sensitive resources or services support zonal deployment. 1. Co-locate the latency sensitive resources or services in the same zone. Other services in your architecture may continue to remain zone redundant.
-1. Replicate the latency sensitive zonal services across multiple availability zones to ensure you're zone resilient.
+
+1. Replicate the latency sensitive zonal services across multiple availability zones to ensure zone resiliency.
+ 1. Load balance between the multiple zonal deployments with a standard or global load balancer. If the Azure service supports availability zones, we highly recommend that you use zone-redundancy by spreading nodes across the zones to get a higher uptime SLA and protection against zonal outages.
-For a 3-tier application, it's important to understand the state (stateful or stateless) of each tier (application, business, and data). State knowledge helps you to architect in alignment with the best practices and guidance according to the type of workload.
-For specialized workload on Azure as below examples, refer to the respective landing zone architecture guidance and best practices.
+For a 3-tier application, it's important to understand the application, business, and data tiers, as well as their state (stateful or stateless), so that you can architect in alignment with the best practices and guidance for the type of workload.
+
+For specialized workloads on Azure, such as the following examples, refer to the respective landing zone architecture guidance and best practices.
+ - SAP - [SAP workload configurations with Azure Availability Zones](/azure/sap/workloads/high-availability-zones)
For specialized workload on Azure as below examples, refer to the respective lan
- [Oracle on Azure architecture design](/azure/architecture/solution-ideas/articles/oracle-on-azure-start-here )
-#### Do you want to achieve BCDR in the same Azure region due to compliance, data residency, or governance requirements?
+#### Do you want to achieve Business Continuity and Disaster Recovery in the same Azure region due to compliance, data residency, or governance requirements?
To achieve business continuity and disaster recovery within the same region and when there **is no regional pair**, we highly recommend that you configure your workload with zone-redundancy. A single-region approach is also applicable to certain industries that have strict data residency and governance requirements within the same Azure region. To learn how to replicate, failover, and failback Azure virtual machines from one availability zone to another within the same Azure region, see [Enable Azure VM disaster recovery between availability zones](/azure/site-recovery/azure-to-azure-how-to-enable-zone-to-zone-disaster-recovery).
reliability Migrate App Configuration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/reliability/migrate-app-configuration.md
Title: Migrate App Configuration to a region with availability zone support description: Learn how to migrate Azure App Configuration to availability zone support.-+ Last updated 09/10/2022-+
search Search Get Started Terraform https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/search-get-started-terraform.md
Title: 'Quickstart: Create an Azure Cognitive Search service using Terraform' description: 'In this article, you create an Azure Cognitive Search service using Terraform' Previously updated : 3/28/2023- Last updated : 4/14/2023+
search Search Security Rbac https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/search-security-rbac.md
Previously updated : 01/26/2023 Last updated : 04/14/2023
More details about using [Azure AD authentication with the Azure SDK for .NET](h
+## Test as current user
+
+If you're already a Contributor or Owner of your search service, you can present a bearer token for your user identity for authentication to Azure Cognitive Search. The following instructions explain how to set up a Postman collection to send requests as the current user.
+
+1. Get a bearer token for the current user:
+
+ ```azurecli
+ az account get-access-token --scope https://search.azure.com/.default
+ ```
+
+1. Start a new Postman collection and edit its properties. In the **Variables** tab, create the following variable:
+
+ | Variable | Description |
+ |-|-|
+ | bearerToken | (copy-paste from get-access-token output on the command line) |
+
+1. In the Authorization tab, select **Bearer Token** as the type.
+
+1. In the **Token** field, specify the variable placeholder `{{bearerToken}}`.
+
+1. Save the collection.
+
+1. Send a request to confirm access. Here's one that queries the hotels-quickstart index:
+
+ ```http
+ POST https://<service-name>.search.windows.net/indexes/hotels-quickstart/docs/search?api-version=2020-06-30
+ {
+ "queryType": "simple",
+ "search": "motel",
+ "filter": "",
+ "select": "HotelName,Description,Category,Tags",
+ "count": true
+ }
+ ```
+ <a name="rbac-single-index"></a> ## Grant access to a single index
sentinel Ci Cd Custom Content https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/ci-cd-custom-content.md
A sample repository is available with ARM templates for each of the content type
## Improve performance with smart deployments
+> [!TIP]
+> To ensure smart deployments work in GitHub, workflows must have read and write permissions on your repository. See [Managing GitHub Actions settings for a repository](https://docs.github.com/en/repositories/managing-your-repositorys-settings-and-features/enabling-features-for-your-repository/managing-github-actions-settings-for-a-repository) for more details.
+>
+ The **smart deployments** feature is a back-end capability that improves performance by actively tracking modifications made to the content files of a connected repository. It uses a CSV file within the '.sentinel' folder in your repository to audit each commit. The workflow avoids redeploying content that hasn't been modified since the last deployment. This process improves your deployment performance and prevents tampering with unchanged content in your workspace, such as resetting dynamic schedules of your analytics rules. Smart deployments are enabled by default on newly created connections. If you prefer all source control content to be deployed every time a deployment is triggered, regardless of whether that content was modified or not, you can modify your workflow to disable smart deployments. For more information, see [Customize the workflow or pipeline](ci-cd-custom-deploy.md#customize-the-workflow-or-pipeline).
Get more examples and step by step instructions on deploying Microsoft Sentinel
- [Deploy custom content from your repository](ci-cd.md) - [Sentinel CICD sample repository](https://github.com/SentinelCICD/RepositoriesSampleContent)-- [Automate Sentinel integration with DevOps](/azure/architecture/example-scenario/devops/automate-sentinel-integration#microsoft-sentinel-repositories)
+- [Automate Sentinel integration with DevOps](/azure/architecture/example-scenario/devops/automate-sentinel-integration#microsoft-sentinel-repositories)
sentinel Ci Cd Custom Deploy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/ci-cd-custom-deploy.md
There are two primary ways to customize the deployment of your repository conten
Microsoft Sentinel currently supports connections to GitHub and Azure DevOps repositories. Before connecting your Microsoft Sentinel workspace to your source control repository, make sure that you have: - An **Owner** role in the resource group that contains your Microsoft Sentinel workspace *or* a combination of **User Access Administrator** and **Sentinel Contributor** roles to create the connection-- Contributor access to your GitHub or Azure DevOps repository
+- Collaborator access to your GitHub repository or Project Administrator access to your Azure DevOps repository
- Actions enabled for GitHub and Pipelines enabled for Azure DevOps - Ensure custom content files you want to deploy to your workspaces are in relevant [Azure Resource Manager (ARM) templates](../azure-resource-manager/templates/index.yml).
sentinel Ci Cd https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/ci-cd.md
When creating custom content, you can manage it from your own Microsoft Sentinel
Microsoft Sentinel currently supports connections to GitHub and Azure DevOps repositories. Before connecting your Microsoft Sentinel workspace to your source control repository, make sure that you have: - An **Owner** role in the resource group that contains your Microsoft Sentinel workspace *or* a combination of **User Access Administrator** and **Sentinel Contributor** roles to create the connection-- Contributor access to your GitHub or Azure DevOps repository
+- Collaborator access to your GitHub repository or Project Administrator access to your Azure DevOps repository
- Actions enabled for GitHub and Pipelines enabled for Azure DevOps - Third-party application access via OAuth enabled for Azure DevOps [application connection policies](/azure/devops/organizations/accounts/change-application-access-policies#manage-a-policy). - Ensure custom content files you want to deploy to your workspaces are in relevant [Azure Resource Manager (ARM) templates](../azure-resource-manager/templates/index.yml).
service-bus-messaging Service Bus Dead Letter Queues https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-bus-messaging/service-bus-dead-letter-queues.md
There's no automatic cleanup of the DLQ. Messages remain in the DLQ until you ex
## DLQ message count
-It's not possible to obtain count of messages in the dead-letter queue at the topic level. That's because messages don't sit at the topic level unless Service Bus throws an internal error. Instead, when a sender sends a message to a topic, the message is forwarded to subscriptions for the topic within milliseconds and thus no longer resides at the topic level. So, you can see messages in the DLQ associated with the subscription for the topic. In the following example, **Service Bus Explorer** shows that there are 62 messages currently in the DLQ for the subscription "test1".
-![DLQ message count](./media/service-bus-dead-letter-queues/dead-letter-queue-message-count.png)
+It's not possible to obtain the count of messages in the dead-letter queue at the topic level. That's because messages don't sit at the topic level. Instead, when a sender sends a message to a topic, the message is forwarded to subscriptions for the topic within milliseconds and thus no longer resides at the topic level. So, you can see messages in the DLQ associated with the subscription for the topic. In the following example, **Service Bus Explorer** shows that there are 62 messages currently in the DLQ for the subscription "test1".
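The Azure CLI command referenced in the next paragraph can surface the same count from the command line. A minimal sketch, assuming hypothetical resource names and that the count is exposed under the `countDetails` property of the command's output:

```azurecli
# Hypothetical resource names; prints the dead-letter message count for the subscription "test1"
az servicebus topic subscription show \
  --resource-group my-rg \
  --namespace-name my-sb-namespace \
  --topic-name my-topic \
  --name test1 \
  --query countDetails.deadLetterMessageCount
```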
+ You can also get the count of DLQ messages by using Azure CLI command: [`az servicebus topic subscription show`](/cli/azure/servicebus/topic/subscription#az-servicebus-topic-subscription-show). ## Moving messages to the DLQ+ There are several activities in Service Bus that cause messages to get pushed to the DLQ from within the messaging engine itself. An application can also explicitly move messages to the DLQ. The following two properties (dead-letter reason and dead-letter description) are added to dead-lettered messages. Applications can define their own codes for the dead-letter reason property, but the system sets the following values. | Dead-letter reason | Dead-letter error description |
There are several activities in Service Bus that cause messages to get pushed to
| MaxDeliveryCountExceeded | Message couldn't be consumed after maximum delivery attempts. See the [Maximum delivery count](#maximum-delivery-count) section for details. | ## Maximum delivery count+ There is a limit on number of attempts to deliver messages for Service Bus queues and subscriptions. The default value is 10. Whenever a message has been delivered under a peek-lock, but has been either explicitly abandoned or the lock has expired, the delivery count on the message is incremented. When the delivery count exceeds the limit, the message is moved to the DLQ. The dead-letter reason for the message in DLQ is set to: MaxDeliveryCountExceeded. This behavior can't be disabled, but you can set the max delivery count to a large number. ## Time to live+ When you enable dead-lettering on queues or subscriptions, all expiring messages are moved to the DLQ. The dead-letter reason code is set to: TTLExpiredException. Deferred messages will not be purged and moved to the dead-letter queue after they expire. This behavior is by design. ## Errors while processing subscription rules+ If you enable dead-lettering on filter evaluation exceptions, any errors that occur while a subscription's SQL filter rule executes are captured in the DLQ along with the offending message. Don't use this option in a production environment in which not all message types have subscribers. ## Application-level dead-lettering+ In addition to the system-provided dead-lettering features, applications can use the DLQ to explicitly reject unacceptable messages. They can include messages that can't be properly processed because of any sort of system issue, messages that hold malformed payloads, or messages that fail authentication when some message-level security scheme is used. This can be done by calling [QueueClient.DeadLetterAsync(Guid lockToken, string deadLetterReason, string deadLetterErrorDescription) method](/dotnet/api/microsoft.servicebus.messaging.queueclient.deadletterasync#microsoft-servicebus-messaging-queueclient-deadletterasync(system-guid-system-string-system-string)).
This can be done by calling [QueueClient.DeadLetterAsync(Guid lockToken, string
It is recommended to include the type of the exception in the DeadLetterReason and the StackTrace of the exception in the DeadLetterDescription, as this makes it easier to troubleshoot the cause of the problem that resulted in messages being dead-lettered. Be aware that this may result in some messages exceeding [the 256-KB quota limit for the Standard tier of Azure Service Bus](./service-bus-quotas.md), which is a further indication that the Premium tier should be used for production environments.

## Dead-lettering in ForwardTo or SendVia scenarios
+
Messages will be sent to the transfer dead-letter queue under the following conditions:

- A message passes through more than four queues or topics that are [chained together](service-bus-auto-forwarding.md).
Messages will be sent to the transfer dead-letter queue under the following cond
- The destination queue or topic exceeds the maximum entity size.

## Path to the dead-letter queue
+
You can access the dead-letter queue by using the following syntax:

```
You can access the dead-letter queue by using the following syntax:
```
+++++++++

## Sending dead-lettered messages to be reprocessed
+
As there can be valuable business data in messages that ended up in the dead-letter queue, it's desirable to have those messages reprocessed once operators have dealt with the circumstances that caused the messages to be dead-lettered in the first place. Tools like [Azure Service Bus Explorer](./explorer.md) enable manual moving of messages between queues and topics. If there are many messages in the dead-letter queue that need to be moved, [code like this](https://stackoverflow.com/a/68632602/151350) can help move them all at once. Operators often prefer having a user interface so they can troubleshoot which message types have failed processing, from which source queues, and for what reasons, while still being able to resubmit batches of messages to be reprocessed. Tools like [ServicePulse with NServiceBus](https://docs.particular.net/servicepulse/intro-failed-messages) provide these capabilities.

## Next steps
+
See [Enable dead lettering for a queue or subscription](enable-dead-letter.md) to learn about different ways of configuring the **dead lettering on message expiration** setting.
+
service-bus-messaging Service Bus Java How To Use Queues https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-bus-messaging/service-bus-java-how-to-use-queues.md
description: This tutorial shows you how to send messages to and receive message
Last updated 04/12/2023 ms.devlang: java-+ # Send messages to and receive messages from Azure Service Bus queues (Java)
Update the `pom.xml` file to add a dependency to the Azure Service Bus package.
### [Passwordless (Recommended)](#tab/passwordless) > [!IMPORTANT]
- > - Replace `NAMESPACENAME` with the name of your Service Bus namespace.
- > - This sample uses `AZURE_PUBLIC_CLOUD` as the authority host. For supported authority hosts, see [`AzureAuthorityHosts`](/dotnet/api/azure.identity.azureauthorityhosts)
+ > Replace `NAMESPACENAME` with the name of your Service Bus namespace.
```java static void sendMessage() { // create a token using the default Azure credential DefaultAzureCredential credential = new DefaultAzureCredentialBuilder()
- .authorityHost(AzureAuthorityHosts.AZURE_PUBLIC_CLOUD)
.build(); ServiceBusSenderClient senderClient = new ServiceBusClientBuilder()
Update the `pom.xml` file to add a dependency to the Azure Service Bus package.
### [Passwordless (Recommended)](#tab/passwordless) > [!IMPORTANT]
- > - Replace `NAMESPACENAME` with the name of your Service Bus namespace.
- > - This sample uses `AZURE_PUBLIC_CLOUD` as the authority host. For supported authority hosts, see [`AzureAuthorityHosts`](/dotnet/api/azure.identity.azureauthorityhosts)
+ > Replace `NAMESPACENAME` with the name of your Service Bus namespace.
```java
Update the `pom.xml` file to add a dependency to the Azure Service Bus package.
{ // create a token using the default Azure credential DefaultAzureCredential credential = new DefaultAzureCredentialBuilder()
- .authorityHost(AzureAuthorityHosts.AZURE_PUBLIC_CLOUD)
.build(); ServiceBusSenderClient senderClient = new ServiceBusClientBuilder()
In this section, you add code to retrieve messages from the queue.
> [!IMPORTANT] > - Replace `NAMESPACENAME` with the name of your Service Bus namespace. > - Replace `QueueTest` in `QueueTest::processMessage` in the code with the name of your class.
- > - This sample uses `AZURE_PUBLIC_CLOUD` as the authority host. For supported authority hosts, see [`AzureAuthorityHosts`](/dotnet/api/azure.identity.azureauthorityhosts)
- ```java // handles received messages
In this section, you add code to retrieve messages from the queue.
CountDownLatch countdownLatch = new CountDownLatch(1); DefaultAzureCredential credential = new DefaultAzureCredentialBuilder()
- .authorityHost(AzureAuthorityHosts.AZURE_PUBLIC_CLOUD)
.build(); ServiceBusProcessorClient processorClient = new ServiceBusClientBuilder()
service-bus-messaging Service Bus Java How To Use Topics Subscriptions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-bus-messaging/service-bus-java-how-to-use-topics-subscriptions.md
Update the `pom.xml` file to add a dependency to the Azure Service Bus package.
``` > [!IMPORTANT]
- > Replace `<NAMESPACE CONNECTION STRING>` with the connection string to your Service Bus namespace. Replace `<TOPIC NAME>` with the name of the topic, and `<SUBSCRIPTION NAME>` with the name of the topic's subscription.
+ > Replace `<NAMESPACE CONNECTION STRING>` with the connection string to your Service Bus namespace. Replace `<TOPIC NAME>` with the name of the topic, and `<SUBSCRIPTION NAME>` with the name of the topic's subscription.
+ > [!IMPORTANT]
+ > Replace `<NAMESPACE CONNECTION STRING>` with the connection string to your Service Bus namespace. Replace `<TOPIC NAME>` with the name of the topic, and `<SUBSCRIPTION NAME>` with the name of the topic's subscription.
3. Add a method named `sendMessage` in the class to send one message to the topic. ### [Passwordless (Recommended)](#tab/passwordless) > [!IMPORTANT]
- > - Replace `NAMESPACENAME` with the name of your Service Bus namespace.
- > - This sample uses `AZURE_PUBLIC_CLOUD` as the authority host. For supported authority hosts, see [`AzureAuthorityHosts`](/dotnet/api/azure.identity.azureauthorityhosts)
+ > Replace `NAMESPACENAME` with the name of your Service Bus namespace.
```java static void sendMessage() { // create a token using the default Azure credential DefaultAzureCredential credential = new DefaultAzureCredentialBuilder()
- .authorityHost(AzureAuthorityHosts.AZURE_PUBLIC_CLOUD)
.build(); ServiceBusSenderClient senderClient = new ServiceBusClientBuilder()
Update the `pom.xml` file to add a dependency to the Azure Service Bus package.
### [Passwordless (Recommended)](#tab/passwordless) > [!IMPORTANT]
- > - Replace `NAMESPACENAME` with the name of your Service Bus namespace.
- > - This sample uses `AZURE_PUBLIC_CLOUD` as the authority host. For supported authority hosts, see [`AzureAuthorityHosts`](/dotnet/api/azure.identity.azureauthorityhosts)
-
+ > Replace `NAMESPACENAME` with the name of your Service Bus namespace.
```java static void sendMessageBatch() { // create a token using the default Azure credential DefaultAzureCredential credential = new DefaultAzureCredentialBuilder()
- .authorityHost(AzureAuthorityHosts.AZURE_PUBLIC_CLOUD)
.build(); ServiceBusSenderClient senderClient = new ServiceBusClientBuilder()
In this section, you add code to retrieve messages from a subscription to the to
> [!IMPORTANT] > - Replace `NAMESPACENAME` with the name of your Service Bus namespace. > - Replace `ServiceBusTopicTest` in `ServiceBusTopicTest::processMessage` in the code with the name of your class.
- > - This sample uses `AZURE_PUBLIC_CLOUD` as the authority host. For supported authority hosts, see [`AzureAuthorityHosts`](/dotnet/api/azure.identity.azureauthorityhosts)
```java // handles received messages
In this section, you add code to retrieve messages from a subscription to the to
CountDownLatch countdownLatch = new CountDownLatch(1); DefaultAzureCredential credential = new DefaultAzureCredentialBuilder()
- .authorityHost(AzureAuthorityHosts.AZURE_PUBLIC_CLOUD)
.build(); // Create an instance of the processor through the ServiceBusClientBuilder
service-connector Concept Availability https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-connector/concept-availability.md
Title: High availability for Service Connector description: This article covers availability zones, zone redundancy, disaster recovery, and cross-region failover for Service Connector.--++ Last updated 05/24/2022
service-connector Concept Region Support https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-connector/concept-region-support.md
Title: Service Connector Region Support description: Service Connector region availability and region support list--++ Last updated 09/19/2022
service-connector Concept Service Connector Internals https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-connector/concept-service-connector-internals.md
Title: Service Connector internals description: Learn about Service Connector internals, the architecture, the connections and how data is transmitted.--++
service-connector How To Integrate App Configuration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-connector/how-to-integrate-app-configuration.md
Title: Integrate Azure App Configuration with Service Connector description: Integrate Azure App Configuration into your application with Service Connector--++ Last updated 08/11/2022
service-connector How To Integrate Confluent Kafka https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-connector/how-to-integrate-confluent-kafka.md
Title: Integrate Apache kafka on Confluent Cloud with Service Connector description: Integrate Apache kafka on Confluent Cloud into your application with Service Connector--++ Last updated 08/11/2022
service-connector How To Integrate Cosmos Cassandra https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-connector/how-to-integrate-cosmos-cassandra.md
Title: Integrate the Azure Cosmos DB for Apache Cassandra with Service Connector description: Integrate the Azure Cosmos DB for Apache Cassandra into your application with Service Connector--++ Last updated 09/19/2022
service-connector How To Integrate Cosmos Db https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-connector/how-to-integrate-cosmos-db.md
Title: Integrate Azure Cosmos DB for MongoDB with Service Connector description: Integrate Azure Cosmos DB for MongoDB into your application with Service Connector--++ Last updated 09/19/2022
service-connector How To Integrate Cosmos Gremlin https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-connector/how-to-integrate-cosmos-gremlin.md
Title: Integrate the Azure Cosmos DB for Apache Gremlin with Service Connector description: Integrate the Azure Cosmos DB for Apache Gremlin into your application with Service Connector--++ Last updated 09/19/2022
service-connector How To Integrate Cosmos Sql https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-connector/how-to-integrate-cosmos-sql.md
Title: Integrate the Azure Cosmos DB for NoSQL with Service Connector description: Integrate the Azure Cosmos DB SQL into your application with Service Connector--++ Last updated 09/19/2022
service-connector How To Integrate Cosmos Table https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-connector/how-to-integrate-cosmos-table.md
Title: Integrate the Azure Cosmos DB for Table with Service Connector description: Integrate the Azure Cosmos DB for Table into your application with Service Connector--++ Last updated 08/11/2022
service-connector How To Integrate Event Hubs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-connector/how-to-integrate-event-hubs.md
Title: Integrate Azure Event Hubs with Service Connector description: Integrate Azure Event Hubs into your application with Service Connector--++ Last updated 08/11/2022
service-connector How To Integrate Key Vault https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-connector/how-to-integrate-key-vault.md
Title: Integrate Azure Key Vault with Service Connector description: Integrate Azure Key Vault into your application with Service Connector--++ Last updated 08/11/2022
service-connector How To Integrate Mysql https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-connector/how-to-integrate-mysql.md
Title: Integrate Azure Database for MySQL with Service Connector description: Integrate Azure Database for MySQL into your application with Service Connector--++ Last updated 11/29/2022
service-connector How To Integrate Postgres https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-connector/how-to-integrate-postgres.md
Title: Integrate Azure Database for PostgreSQL with Service Connector description: Integrate Azure Database for PostgreSQL into your application with Service Connector--++ Last updated 11/29/2022
service-connector How To Integrate Redis Cache https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-connector/how-to-integrate-redis-cache.md
Title: Integrate Azure Cache for Redis and Azure Cache Redis Enterprise with Service Connector description: Integrate Azure Cache for Redis and Azure Cache Redis Enterprise into your application with Service Connector--++ Last updated 08/11/2022
service-connector How To Integrate Service Bus https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-connector/how-to-integrate-service-bus.md
Title: Integrate Azure Service Bus with Service Connector description: Integrate Service Bus into your application with Service Connector--++
service-connector How To Integrate Signalr https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-connector/how-to-integrate-signalr.md
Title: Integrate Azure SignalR Service with Service Connector description: Integrate Azure SignalR Service into your application with Service Connector. Learn about authentication types and client types of Azure SignalR Service.--++ Last updated 08/11/2022
service-connector How To Integrate Sql Database https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-connector/how-to-integrate-sql-database.md
Title: Integrate Azure SQL Database with Service Connector description: Integrate SQL into your application with Service Connector--++ Last updated 11/29/2022
service-connector How To Integrate Storage Blob https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-connector/how-to-integrate-storage-blob.md
Title: Integrate Azure Blob Storage with Service Connector description: Integrate Azure Blob Storage into your application with Service Connector--++
service-connector How To Integrate Storage File https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-connector/how-to-integrate-storage-file.md
Title: Integrate Azure Files with Service Connector description: Integrate Azure Files into your application with Service Connector--++ Last updated 08/11/2022
service-connector How To Integrate Storage Queue https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-connector/how-to-integrate-storage-queue.md
Title: Integrate Azure Queue Storage with Service Connector description: Integrate Azure Queue Storage into your application with Service Connector--++ Last updated 08/11/2022
service-connector How To Integrate Storage Table https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-connector/how-to-integrate-storage-table.md
Title: Integrate Azure Table Storage with Service Connector description: Integrate Azure Table Storage into your application with Service Connector--++ Last updated 08/11/2022
service-connector How To Integrate Web Pubsub https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-connector/how-to-integrate-web-pubsub.md
Title: Integrate Azure Web PubSub with service connector description: Integrate Azure Web PubSub into your application with Service Connector--++ Last updated 08/11/2022
service-connector How To Manage Authentication https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-connector/how-to-manage-authentication.md
Title: Manage authentication in Service Connector description: Learn how to select and manage authentication parameters in Service Connector. -+ Last updated 03/07/2023-+ # Manage authentication within Service Connector
service-connector How To Troubleshoot Front End Error https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-connector/how-to-troubleshoot-front-end-error.md
Title: Service Connector troubleshooting guidance description: This article lists error messages and suggested actions of Service Connector to use for troubleshooting issues.--++ Last updated 5/25/2022
service-connector Known Limitations https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-connector/known-limitations.md
Last updated 03/02/2023--++ # Known limitations of Service Connector
service-connector Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-connector/overview.md
Title: What is Service Connector? description: Understand typical use case scenarios for Service Connector, and learn the key benefits of Service Connector.--++
service-connector Quickstart Cli App Service Connection https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-connector/quickstart-cli-app-service-connection.md
Title: Quickstart - Create a service connection in App Service with the Azure CLI description: Quickstart showing how to create a service connection in App Service with the Azure CLI--++ Last updated 04/13/2023
service-connector Quickstart Cli Container Apps https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-connector/quickstart-cli-container-apps.md
Title: Quickstart - Create a service connection in Container Apps using the Azure CLI description: Quickstart showing how to create a service connection in Azure Container Apps using the Azure CLI--++ Last updated 04/13/2023
service-connector Quickstart Cli Spring Cloud Connection https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-connector/quickstart-cli-spring-cloud-connection.md
Title: Quickstart - Create a service connection in Azure Spring Apps with the Azure CLI description: Quickstart showing how to create a service connection in Azure Spring Apps with the Azure CLI displayName: --++ Last updated 04/13/2022
service-connector Quickstart Portal App Service Connection https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-connector/quickstart-portal-app-service-connection.md
Title: Quickstart - Create a service connection in App Service from the Azure portal description: Quickstart showing how to create a service connection in App Service from the Azure portal--++
service-connector Quickstart Portal Container Apps https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-connector/quickstart-portal-container-apps.md
Title: Quickstart - Create a service connection in Container Apps from the Azure portal description: Quickstart showing how to create a service connection in Azure Container Apps from the Azure portal--++
service-connector Quickstart Portal Spring Cloud Connection https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-connector/quickstart-portal-spring-cloud-connection.md
Title: Create a service connection in Azure Spring Apps from the Azure portal description: This quickstart shows you how to create a service connection in Azure Spring Apps from the Azure portal.--++ Last updated 08/10/2022
service-connector Tutorial Connect Web App App Configuration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-connector/tutorial-connect-web-app-app-configuration.md
Title: 'Tutorial: Connect a web app to Azure App Configuration with Service Connector' description: Learn how you can connect an ASP.NET Core application hosted in Azure Web Apps to App Configuration using Service Connector'--++ Last updated 10/24/2022
service-connector Tutorial Csharp Webapp Storage Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-connector/tutorial-csharp-webapp-storage-cli.md
Title: 'Tutorial: Deploy a web application connected to Azure Blob Storage with Service Connector' description: Create a web app connected to Azure Blob Storage with Service Connector.--++ Last updated 05/03/2022
service-connector Tutorial Django Webapp Postgres Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-connector/tutorial-django-webapp-postgres-cli.md
Title: 'Tutorial: Using Service Connector to build a Django app with Postgres on
description: Create a Python web app with a PostgreSQL database and deploy it to Azure. The tutorial uses the Django framework, the app is hosted on Azure App Service on Linux, and the App Service and Database is connected with Service Connector. ms.devlang: python --++ Last updated 05/03/2022
service-connector Tutorial Java Spring Confluent Kafka https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-connector/tutorial-java-spring-confluent-kafka.md
Title: 'Tutorial: Deploy a Spring Boot app connected to Apache Kafka on Confluen
description: Create a Spring Boot app connected to Apache Kafka on Confluent Cloud with Service Connector in Azure Spring Apps. ms.devlang: java --++ Last updated 05/03/2022
service-connector Tutorial Java Spring Mysql https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-connector/tutorial-java-spring-mysql.md
Title: 'Tutorial: Deploy an application to Azure Spring Apps and connect it to Azure Database for MySQL Flexible Server using Service Connector' description: Create a Spring Boot application connected to Azure Database for MySQL Flexible Server with Service Connector.--++ Last updated 11/02/2022
service-connector Tutorial Portal Key Vault https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-connector/tutorial-portal-key-vault.md
Title: Tutorial - Create a service connection and store secrets into Key Vault description: Tutorial showing how to create a service connection and store secrets into Key Vault--++
spatial-anchors Reliability Spatial Anchors https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spatial-anchors/concepts/reliability-spatial-anchors.md
Title: Resiliency in Azure Spatial Anchors #Required; Must be "Resiliency in *your official service name*"
-description: Find out about reliability in Azure Spatial Anchors #Required;
--
+ Title: Resiliency in Azure Spatial Anchors
+description: Find out about reliability in Azure Spatial Anchors
++ Previously updated : 11/18/2022 #Required; mm/dd/yyyy format. Last updated : 11/18/2022 #Customer intent: As a customer, I want to understand reliability support for Azure Spatial Anchors so that I can respond to and/or avoid failures in order to minimize downtime and data loss.
spring-apps Quickstart Integrate Azure Database Mysql https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-apps/quickstart-integrate-azure-database-mysql.md
Title: "Quickstart - Integrate with Azure Database for MySQL" description: Explains how to provision and prepare an Azure Database for MySQL instance, and then configure Pet Clinic on Azure Spring Apps to use it as a persistent database with only one command.--++ Last updated 08/28/2022
storage-mover Agent Deploy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage-mover/agent-deploy.md
Title: How to deploy an Azure Storage Mover agent. #Required; page title is displayed in search results. Include the brand.
-description: Learn how to deploy an Azure Mover agent #Required; article description that is displayed in search results.
+ Title: How to deploy an Azure Storage Mover agent.
+description: Learn how to deploy an Azure Mover agent
storage-mover Agent Register https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage-mover/agent-register.md
Title: How to register an Azure Storage Mover agent #Required; page title is displayed in search results. Include the brand.
-description: Learn about agent VM registration to run your migration jobs. #Required; article description that is displayed in search results.
+ Title: How to register an Azure Storage Mover agent
+description: Learn about agent VM registration to run your migration jobs.
storage-mover Project Manage https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage-mover/project-manage.md
Title: How to manage Azure Mover projects #Required; page title is displayed in search results. Include the brand.
-description: Learn how to manage Azure Mover projects #Required; article description that is displayed in search results.
+ Title: How to manage Azure Mover projects
+description: Learn how to manage Azure Mover projects
storage-mover Storage Mover Create https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage-mover/storage-mover-create.md
Title: How to create a storage mover resource #Required; page title is displayed in search results. Include the brand.
-description: Learn how to create a top-level Azure Storage Mover resource #Required; article description that is displayed in search results.
+ Title: How to create a storage mover resource
+description: Learn how to create a top-level Azure Storage Mover resource
storage Snapshots Manage Dotnet https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/snapshots-manage-dotnet.md
Title: Create and manage a blob snapshot in .NET
+ Title: Create and manage a blob snapshot with .NET
description: Learn how to use the .NET client library to create a read-only snapshot of a blob to back up blob data at a given moment in time.
Last updated 08/27/2020 ms.devlang: csharp-+
-# Create and manage a blob snapshot in .NET
+# Create and manage a blob snapshot with .NET
A snapshot is a read-only version of a blob that's taken at a point in time. This article shows how to create and manage blob snapshots using the [Azure Storage client library for .NET](/dotnet/api/overview/azure/storage).
To create a snapshot of a block blob, use one of the following methods:
The following code example shows how to create a snapshot. Include a reference to the [Azure.Identity](https://www.nuget.org/packages/azure.identity) library to use your Azure AD credentials to authorize requests to the service. For more information about using the [DefaultAzureCredential](/dotnet/api/azure.identity.defaultazurecredential) class to authorize a managed identity to access Azure Storage, see [Azure Identity client library for .NET](/dotnet/api/overview/azure/identity-readme). ```csharp
-private static async Task CreateBlockBlobSnapshot(string accountName, string containerName, string blobName, Stream data)
+private static async Task CreateBlockBlobSnapshot(
+ string accountName,
+ string containerName,
+ string blobName,
+ Stream data)
{ const string blobServiceEndpointSuffix = ".blob.core.windows.net";
- Uri containerUri = new Uri("https://" + accountName + blobServiceEndpointSuffix + "/" + containerName);
+ Uri containerUri =
+ new Uri("https://" + accountName + blobServiceEndpointSuffix + "/" + containerName);
// Get a container client object and create the container. BlobContainerClient containerClient = new BlobContainerClient(containerUri,
The following code example shows how to delete a blob and its snapshots in .NET,
await blobClient.DeleteIfExistsAsync(DeleteSnapshotsOption.IncludeSnapshots, null, default); ```
-## Next steps
+## Copy a blob snapshot over the base blob
-- [Blob snapshots](snapshots-overview.md)-- [Blob versions](versioning-overview.md)-- [Soft delete for blobs](./soft-delete-blob-overview.md)
+You can perform a copy operation to promote a snapshot over its base blob, as long as the base blob is in an online tier (hot or cool). The snapshot remains, but the base blob is overwritten with a copy that can be read and written to.
+
+The following code example shows how to copy a blob snapshot over the base blob:
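One way that example could look is the following minimal sketch, which assumes a `BlobContainerClient` named `containerClient` and a snapshot timestamp string `snapshotTimestamp`:

```csharp
// Minimal sketch (assumed variables): promote a snapshot by copying it over its base blob.
BlobClient baseBlob = containerClient.GetBlobClient("sample-blob.txt");

// Point a client at the specific snapshot to restore.
var snapshotBlob = baseBlob.WithSnapshot(snapshotTimestamp);

// Copy the snapshot over the base blob; the base blob must be in an online tier (hot or cool).
CopyFromUriOperation copyOperation = await baseBlob.StartCopyFromUriAsync(snapshotBlob.Uri);
await copyOperation.WaitForCompletionAsync();
```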
+ ## Resources
-For related code samples using deprecated .NET version 11.x SDKs, see [Code samples using .NET version 11.x](blob-v11-samples-dotnet.md#create-a-snapshot).
+To learn more about managing blob snapshots using the Azure Blob Storage client library for .NET, see the following resources.
+
+For related code samples using deprecated .NET version 11.x SDKs, see [Code samples using .NET version 11.x](blob-v11-samples-dotnet.md#create-a-snapshot).
++
+### See also
+
+- [Blob snapshots](snapshots-overview.md)
+- [Blob versions](versioning-overview.md)
+- [Soft delete for blobs](./soft-delete-blob-overview.md)
storage Storage Blob Copy Async Dotnet https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blob-copy-async-dotnet.md
+
+ Title: Copy a blob with asynchronous scheduling using .NET
+
+description: Learn how to copy a blob with asynchronous scheduling in Azure Storage by using the .NET client library.
+++ Last updated : 04/11/2023+++
+ms.devlang: csharp
+++
+# Copy a blob with asynchronous scheduling using .NET
+
+This article shows how to copy a blob with asynchronous scheduling using the [Azure Storage client library for .NET](/dotnet/api/overview/azure/storage). You can copy a blob from a source within the same storage account, from a source in a different storage account, or from any accessible object retrieved via HTTP GET request on a given URL. You can also abort a pending copy operation.
+
+The client library methods covered in this article use the [Copy Blob](/rest/api/storageservices/copy-blob) REST API operation, and can be used when you want to perform a copy with asynchronous scheduling. For most copy scenarios where you want to move data into a storage account and have a URL for the source object, see [Copy a blob from a source object URL with .NET](storage-blob-copy-url-dotnet.md).
+
+## Prerequisites
+
+To work with the code examples in this article, make sure you have:
+
+- An authorized client object to connect to Blob Storage data resources. To learn more, see [Create and manage client objects that interact with data resources](storage-blob-client-management.md).
+- Permissions to perform a copy operation. To learn more, see the authorization guidance for the following REST API operations:
+ - [Copy Blob](/rest/api/storageservices/copy-blob#authorization)
+ - [Abort Copy Blob](/rest/api/storageservices/abort-copy-blob#authorization)
+- Packages installed to your project directory. These examples use **Azure.Storage.Blobs**. If you're using `DefaultAzureCredential` for authorization, you also need **Azure.Identity**. To learn more about setting up your project, see [Get Started with Azure Storage and .NET](storage-blob-dotnet-get-started.md#set-up-your-project). To see the necessary `using` directives, see [Code samples](#code-samples).
+
+## About copying blobs with asynchronous scheduling
+
+The `Copy Blob` operation can finish asynchronously and is performed on a best-effort basis, which means that the operation isn't guaranteed to start immediately or complete within a specified time frame. The copy operation is scheduled in the background and performed as the server has available resources. The operation can complete synchronously if the copy occurs within the same storage account.
+
+A `Copy Blob` operation can perform any of the following actions:
+
+- Copy a source blob to a destination blob with a different name. The destination blob can be an existing blob of the same blob type (block, append, or page), or it can be a new blob created by the copy operation.
+- Copy a source blob to a destination blob with the same name, which replaces the destination blob. This type of copy operation removes any uncommitted blocks and overwrites the destination blob's metadata.
+- Copy a source file in the Azure File service to a destination blob. The destination blob can be an existing block blob, or can be a new block blob created by the copy operation. Copying from files to page blobs or append blobs isn't supported.
+- Copy a snapshot over its base blob. By promoting a snapshot to the position of the base blob, you can restore an earlier version of a blob.
+- Copy a snapshot to a destination blob with a different name. The resulting destination blob is a writeable blob and not a snapshot.
+
+The source blob for a copy operation may be one of the following types: block blob, append blob, page blob, blob snapshot, or blob version. The copy operation always copies the entire source blob or file. Copying a range of bytes or set of blocks isn't supported.
+
+If the destination blob already exists, it must be of the same blob type as the source blob, and the existing destination blob is overwritten. The destination blob can't be modified while a copy operation is in progress, and a destination blob can only have one outstanding copy operation.
+
+To learn more about the `Copy Blob` operation, including information about properties, index tags, metadata, and billing, see [Copy Blob remarks](/rest/api/storageservices/copy-blob#remarks).
+
+## Copy a blob with asynchronous scheduling
+
+This section gives an overview of methods provided by the Azure Storage client library for .NET to perform a copy operation with asynchronous scheduling.
+
+The following methods wrap the [Copy Blob](/rest/api/storageservices/copy-blob) REST API operation, and begin an asynchronous copy of data from the source blob:
+
+- [StartCopyFromUri](/dotnet/api/azure.storage.blobs.specialized.blobbaseclient.startcopyfromuri)
+- [StartCopyFromUriAsync](/dotnet/api/azure.storage.blobs.specialized.blobbaseclient.startcopyfromuriasync)
+
+The `StartCopyFromUri` and `StartCopyFromUriAsync` methods return a [CopyFromUriOperation](/dotnet/api/azure.storage.blobs.models.copyfromurioperation) object containing information about the copy operation. These methods are used when you want asynchronous scheduling for a copy operation.
+
+## Copy a blob within the same storage account
+
+If you're copying a blob within the same storage account, access to the source blob can be authorized via Azure Active Directory (Azure AD), a shared access signature (SAS), or an account key. The operation can complete synchronously if the copy occurs within the same storage account.
+
+The following example shows a scenario for copying a source blob within the same storage account. This example also shows how to lease the source blob during the copy operation to prevent changes to the blob from a different client. The `Copy Blob` operation saves the `ETag` value of the source blob when the copy operation starts. If the `ETag` value is changed before the copy operation finishes, the operation fails.
++
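A minimal sketch of this scenario follows, assuming a `BlobServiceClient` authorized for the account and hypothetical container and blob names; the lease guards the source blob until the copy completes:

```csharp
// Minimal sketch (assumed names): copy a blob within the same storage account
// while holding a lease on the source blob. Requires Azure.Storage.Blobs.Specialized
// for the GetBlobLeaseClient extension method.
public static async Task CopyWithinAccountAsync(BlobServiceClient blobServiceClient)
{
    BlobClient sourceBlob = blobServiceClient
        .GetBlobContainerClient("source-container")
        .GetBlobClient("sample-blob.txt");
    BlobClient destinationBlob = blobServiceClient
        .GetBlobContainerClient("destination-container")
        .GetBlobClient("sample-blob.txt");

    // Acquire an infinite lease so no other client can modify the source mid-copy.
    BlobLeaseClient sourceLease = sourceBlob.GetBlobLeaseClient();
    await sourceLease.AcquireAsync(BlobLeaseClient.InfiniteLeaseDuration);

    try
    {
        // Begin the copy with asynchronous scheduling and wait for it to finish.
        CopyFromUriOperation copyOperation =
            await destinationBlob.StartCopyFromUriAsync(sourceBlob.Uri);
        await copyOperation.WaitForCompletionAsync();
    }
    finally
    {
        // Release the lease so the source blob can be modified again.
        await sourceLease.ReleaseAsync();
    }
}
```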
+## Copy a blob from another storage account
+
+If the source is a blob in another storage account, the source blob must either be public or authorized via SAS token. The SAS token needs to include the **Read ('r')** permission. To learn more about SAS tokens, see [Delegate access with shared access signatures](../common/storage-sas-overview.md).
+
+The following example shows a scenario for copying a blob from another storage account. In this example, we create a source blob URI with an appended service SAS token by calling [GenerateSasUri](/dotnet/api/azure.storage.blobs.blobcontainerclient.generatesasuri) on the blob client. To use this method, the source blob client needs to be authorized via account key.
++
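Sketching that scenario, and assuming `sourceBlob` is a `BlobClient` authorized with an account key and `destinationBlob` is a `BlobClient` in the destination account:

```csharp
// Minimal sketch (assumed clients): generate a read-only service SAS for the source
// blob, then start the copy from the destination account.
// Requires Azure.Storage.Sas for BlobSasPermissions.
Uri sourceBlobSasUri = sourceBlob.GenerateSasUri(
    BlobSasPermissions.Read,
    DateTimeOffset.UtcNow.AddHours(1));

CopyFromUriOperation copyOperation =
    await destinationBlob.StartCopyFromUriAsync(sourceBlobSasUri);
await copyOperation.WaitForCompletionAsync();
```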
+If you already have a SAS token, you can construct the URI for the source blob as follows:
+
+```csharp
+// Append the SAS token to the URI - include ? before the SAS token
+var sourceBlobSASURI = new Uri(
+ $"https://{srcAccountName}.blob.core.windows.net/{srcContainerName}/{srcBlobName}?{sasToken}");
+```
+
+You can also [create a user delegation SAS token with .NET](storage-blob-user-delegation-sas-create-dotnet.md). User delegation SAS tokens offer greater security, as they're signed with Azure AD credentials instead of an account key.
+
+## Copy a blob from a source outside of Azure
+
+You can perform a copy operation on any source object that can be retrieved via HTTP GET request on a given URL, including accessible objects outside of Azure. The following example shows a scenario for copying a blob from an accessible source object URL.
++
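For instance, a minimal sketch that assumes a publicly readable source URL and a `destinationBlob` client:

```csharp
// Minimal sketch (assumed URL and client): copy any publicly accessible object into Blob Storage.
Uri externalSource = new Uri("https://www.example.com/sample-file.txt");

CopyFromUriOperation copyOperation =
    await destinationBlob.StartCopyFromUriAsync(externalSource);
await copyOperation.WaitForCompletionAsync();
```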
+## Check the status of a copy operation
+
+To check the status of a `Copy Blob` operation, you can call [UpdateStatusAsync](/dotnet/api/azure.storage.blobs.models.copyfromurioperation.updatestatusasync#azure-storage-blobs-models-copyfromurioperation-updatestatusasync(system-threading-cancellationtoken)) and parse the response to get the value for the `x-ms-copy-status` header.
+
+The following code example shows how to check the status of a copy operation:
++
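A minimal sketch, assuming `copyOperation` is the `CopyFromUriOperation` returned by `StartCopyFromUriAsync`:

```csharp
// Minimal sketch (assumed copyOperation): refresh the operation and read x-ms-copy-status.
Response response = await copyOperation.UpdateStatusAsync();
if (response.Headers.TryGetValue("x-ms-copy-status", out string copyStatus))
{
    Console.WriteLine($"Copy status: {copyStatus}");
}
```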
+## Abort a copy operation
+
+Aborting a pending `Copy Blob` operation results in a destination blob of zero length. However, the metadata for the destination blob has the new values copied from the source blob or set explicitly during the copy operation. To keep the original metadata from before the copy, make a snapshot of the destination blob before calling one of the copy methods.
+
+To abort a pending copy operation, call one of the following operations:
+- [AbortCopyFromUri](/dotnet/api/azure.storage.blobs.specialized.blobbaseclient.abortcopyfromuri)
+- [AbortCopyFromUriAsync](/dotnet/api/azure.storage.blobs.specialized.blobbaseclient.abortcopyfromuriasync)
+
+These methods wrap the [Abort Copy Blob](/rest/api/storageservices/abort-copy-blob) REST API operation, which cancels a pending `Copy Blob` operation. The following code example shows how to abort a pending `Copy Blob` operation:
++
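A minimal sketch, assuming `copyOperation` from `StartCopyFromUriAsync` and the `destinationBlob` client it was started on:

```csharp
// Minimal sketch (assumed variables): abort the copy only if it's still pending.
Response response = await copyOperation.UpdateStatusAsync();
if (response.Headers.TryGetValue("x-ms-copy-status", out string copyStatus) &&
    copyStatus == "pending")
{
    // Abort by passing the copy ID of the pending operation.
    await destinationBlob.AbortCopyFromUriAsync(copyOperation.Id);
}
```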
+## Resources
+
+To learn more about copying blobs using the Azure Blob Storage client library for .NET, see the following resources.
+
+### REST API operations
+
+The Azure SDK for .NET contains libraries that build on top of the Azure REST API, allowing you to interact with REST API operations through familiar .NET paradigms. The client library methods covered in this article use the following REST API operations:
+
+- [Copy Blob](/rest/api/storageservices/copy-blob) (REST API)
+- [Abort Copy Blob](/rest/api/storageservices/abort-copy-blob) (REST API)
+
+### Code samples
+
+- [View code samples from this article (GitHub)](https://github.com/Azure-Samples/AzureStorageSnippets/blob/master/blobs/howto/dotnet/BlobDevGuideBlobs/CopyBlob.cs)
+
storage Storage Blob Copy Url Dotnet https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blob-copy-url-dotnet.md
+
+ Title: Copy a blob from a source object URL with .NET
+
+description: Learn how to copy a blob from a source object URL in Azure Storage by using the .NET client library.
+++ Last updated : 04/11/2023+++
+ms.devlang: csharp
+++
+# Copy a blob from a source object URL with .NET
+
+This article shows how to copy a blob from a source object URL using the [Azure Storage client library for .NET](/dotnet/api/overview/azure/storage). You can copy a blob from a source within the same storage account, from a source in a different storage account, or from any accessible object retrieved via HTTP GET request on a given URL.
+
+The client library methods covered in this article use the [Put Blob From URL](/rest/api/storageservices/put-blob-from-url) and [Put Block From URL](/rest/api/storageservices/put-block-from-url) REST API operations. These methods are preferred for copy scenarios where you want to move data into a storage account and have a URL for the source object. For copy operations where you want asynchronous scheduling, see [Copy a blob with asynchronous scheduling using .NET](storage-blob-copy-async-dotnet.md).
+
+## Prerequisites
+
+To work with the code examples in this article, make sure you have:
+
+- An authorized client object to connect to Blob Storage data resources. To learn more, see [Create and manage client objects that interact with data resources](storage-blob-client-management.md).
+- Permissions to perform a copy operation. To learn more, see the authorization guidance for the following REST API operations:
+ - [Put Blob From URL](/rest/api/storageservices/put-blob-from-url#authorization)
+ - [Put Block From URL](/rest/api/storageservices/put-block-from-url#authorization)
+- Packages installed to your project directory. These examples use **Azure.Storage.Blobs**. If you're using `DefaultAzureCredential` for authorization, you also need **Azure.Identity**. To learn more about setting up your project, see [Get Started with Azure Storage and .NET](storage-blob-dotnet-get-started.md#set-up-your-project). To see the necessary `using` directives, see [Code samples](#code-samples).
+
+## About copying blobs from a source object URL
+
+The `Put Blob From URL` operation creates a new block blob where the contents of the blob are read from a given URL. The operation completes synchronously.
+
+The source can be any object retrievable via a standard HTTP GET request on the given URL. This includes block blobs, append blobs, page blobs, blob snapshots, blob versions, or any accessible object inside or outside Azure.
+
+When the source object is a block blob, all committed blob content is copied. The content of the destination blob is identical to the content of the source, but the committed block list isn't preserved and uncommitted blocks aren't copied.
+
+The destination is always a block blob, either an existing block blob, or a new block blob created by the operation. The contents of an existing blob are overwritten with the contents of the new blob.
+
+The `Put Blob From URL` operation always copies the entire source blob. Copying a range of bytes or set of blocks isn't supported. To perform partial updates to a block blob's contents by using a source URL, use the [Put Block From URL](/rest/api/storageservices/put-block-from-url) API along with [Put Block List](/rest/api/storageservices/put-block-list).
+
+To learn more about the `Put Blob From URL` operation, including blob size limitations and billing considerations, see [Put Blob From URL remarks](/rest/api/storageservices/put-blob-from-url#remarks).
+
+## Copy a blob from a source object URL
+
+This section gives an overview of methods provided by the Azure Storage client library for .NET to perform a copy operation from a source object URL.
+
+The following methods wrap the [Put Blob From URL](/rest/api/storageservices/put-blob-from-url) REST API operation, and create a new block blob where the contents of the blob are read from a given URL:
+
+- [SyncUploadFromUri](/dotnet/api/azure.storage.blobs.specialized.blockblobclient.syncuploadfromuri)
+- [SyncUploadFromUriAsync](/dotnet/api/azure.storage.blobs.specialized.blockblobclient.syncuploadfromuriasync)
+
+These methods are preferred for scenarios where you want to move data into a storage account and have a URL for the source object.
+
+For large objects, you may choose to work with individual blocks. The following methods wrap the [Put Block From URL](/rest/api/storageservices/put-block-from-url) REST API operation. These methods create a new block to be committed as part of a blob where the contents are read from a source URL:
+
+- [StageBlockFromUri](/dotnet/api/azure.storage.blobs.specialized.blockblobclient.stageblockfromuri)
+- [StageBlockFromUriAsync](/dotnet/api/azure.storage.blobs.specialized.blockblobclient.stageblockfromuriasync)
+
+## Copy a blob within the same storage account
+
+If you're copying a blob within the same storage account, access to the source blob can be authorized via Azure Active Directory (Azure AD), a shared access signature (SAS), or an account key.
+
+The following example shows a scenario for copying a source blob within the same storage account. The [SyncUploadFromUriAsync](/dotnet/api/azure.storage.blobs.specialized.blockblobclient.syncuploadfromuriasync) method can optionally accept a Boolean parameter to indicate whether an existing blob should be overwritten, as shown in the example. The `overwrite` parameter defaults to false.
++
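A minimal sketch of that call, assuming a `BlobServiceClient` for the account and hypothetical container and blob names (the `GetBlockBlobClient` extension comes from `Azure.Storage.Blobs.Specialized`):

```csharp
// Minimal sketch (assumed names): copy a blob synchronously within the same account.
BlockBlobClient sourceBlob = blobServiceClient
    .GetBlobContainerClient("source-container")
    .GetBlockBlobClient("sample-blob.txt");
BlockBlobClient destinationBlob = blobServiceClient
    .GetBlobContainerClient("destination-container")
    .GetBlockBlobClient("sample-blob.txt");

// Pass overwrite: true to replace an existing destination blob.
await destinationBlob.SyncUploadFromUriAsync(sourceBlob.Uri, overwrite: true);
```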
+The [SyncUploadFromUriAsync](/dotnet/api/azure.storage.blobs.specialized.blockblobclient.syncuploadfromuriasync) method can also accept a [BlobSyncUploadFromUriOptions](/dotnet/api/azure.storage.blobs.models.blobsyncuploadfromurioptions) parameter to specify further options for the operation.
+
+## Copy a blob from another storage account
+
+If the source is a blob in another storage account, the source blob must either be public, or authorized via Azure AD or SAS token. The SAS token needs to include the **Read ('r')** permission. To learn more about SAS tokens, see [Delegate access with shared access signatures](../common/storage-sas-overview.md).
+
+The following example shows a scenario for copying a blob from another storage account. In this example, we create a source blob URI with an appended *service SAS token* by calling [GenerateSasUri](/dotnet/api/azure.storage.blobs.blobcontainerclient.generatesasuri) on the blob client. To use this method, the source blob client needs to be authorized via account key.
++
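Sketching that flow, and assuming `sourceBlob` is a client authorized with an account key and `destinationBlob` is a `BlockBlobClient` in the destination account:

```csharp
// Minimal sketch (assumed clients): create a read-only service SAS for the source,
// then copy it synchronously into the destination account.
// Requires Azure.Storage.Sas for BlobSasPermissions.
Uri sourceBlobSasUri = sourceBlob.GenerateSasUri(
    BlobSasPermissions.Read,
    DateTimeOffset.UtcNow.AddHours(1));

await destinationBlob.SyncUploadFromUriAsync(sourceBlobSasUri, overwrite: true);
```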
+If you already have a SAS token, you can construct the URI for the source blob as follows:
+
+```csharp
+// Append the SAS token to the URI - include ? before the SAS token
+var sourceBlobSASURI = new Uri(
+ $"https://{srcAccountName}.blob.core.windows.net/{srcContainerName}/{srcBlobName}?{sasToken}");
+```
+
+You can also [create a user delegation SAS token with .NET](storage-blob-user-delegation-sas-create-dotnet.md). User delegation SAS tokens offer greater security, as they're signed with Azure AD credentials instead of an account key.
+
+## Copy a blob from a source outside of Azure
+
+You can perform a copy operation on any source object that can be retrieved via HTTP GET request on a given URL, including accessible objects outside of Azure. The following example shows a scenario for copying a blob from an accessible source object URL.
++
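For example, a minimal sketch assuming a publicly readable source URL and a destination `BlockBlobClient`:

```csharp
// Minimal sketch (assumed URL and client): copy an accessible external object into a block blob.
Uri externalSource = new Uri("https://www.example.com/sample-file.txt");

await destinationBlob.SyncUploadFromUriAsync(externalSource, overwrite: true);
```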
+## Resources
+
+To learn more about copying blobs using the Azure Blob Storage client library for .NET, see the following resources.
+
+### REST API operations
+
+The Azure SDK for .NET contains libraries that build on top of the Azure REST API, allowing you to interact with REST API operations through familiar .NET paradigms. The client library methods covered in this article use the following REST API operations:
+
+- [Put Blob From URL](/rest/api/storageservices/put-blob-from-url) (REST API)
+- [Put Block From URL](/rest/api/storageservices/put-block-from-url) (REST API)
+
+### Code samples
+
+- [View code samples from this article (GitHub)](https://github.com/Azure-Samples/AzureStorageSnippets/blob/master/blobs/howto/dotnet/BlobDevGuideBlobs/PutBlobFromURL.cs)
+
storage Storage Blob Copy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blob-copy.md
Title: Copy a blob with .NET
-description: Learn how to copy a blob in Azure Storage by using the .NET client library.
+description: Learn how to copy blobs in Azure Storage using the .NET client library.
Previously updated : 03/14/2023 Last updated : 04/14/2023
# Copy a blob with .NET
-This article shows how to copy a blob in a storage account using the [Azure Storage client library for .NET](/dotnet/api/overview/azure/storage). It also shows how to abort an asynchronous copy operation.
+This article provides an overview of copy operations using the [Azure Storage client library for .NET](/dotnet/api/overview/azure/storage).
-## About copying blobs
+Copy operations can be used to move data within a storage account, between storage accounts, or into a storage account from a source outside of Azure. When using the Blob Storage client libraries to copy data resources, it's important to understand the REST API operations behind the client library methods. The following table lists REST API operations that can be used to copy data resources to a storage account. The table also includes links to detailed guidance about how to perform these operations using the [Azure Storage client library for .NET](/dotnet/api/overview/azure/storage).
-A copy operation can perform any of the following actions:
+| REST API operation | When to use | Client library methods | Guidance |
+| | | | |
+| [Put Blob From URL](/rest/api/storageservices/put-blob-from-url) | This operation is preferred for scenarios where you want to move data into a storage account and have a URL for the source object. This operation completes synchronously. | [SyncUploadFromUri](/dotnet/api/azure.storage.blobs.specialized.blockblobclient.syncuploadfromuri)<br>[SyncUploadFromUriAsync](/dotnet/api/azure.storage.blobs.specialized.blockblobclient.syncuploadfromuriasync) | [Copy a blob from a source object URL with .NET](storage-blob-copy-url-dotnet.md) |
+| [Put Block From URL](/rest/api/storageservices/put-block-from-url) | For large objects, you can use [Put Block From URL](/rest/api/storageservices/put-block-from-url) to write individual blocks to Blob Storage, and then call [Put Block List](/rest/api/storageservices/put-block-list) to commit those blocks to a block blob. This operation completes synchronously. | [StageBlockFromUri](/dotnet/api/azure.storage.blobs.specialized.blockblobclient.stageblockfromuri)<br>[StageBlockFromUriAsync](/dotnet/api/azure.storage.blobs.specialized.blockblobclient.stageblockfromuriasync) | [Copy a blob from a source object URL with .NET](storage-blob-copy-url-dotnet.md) |
+| [Copy Blob](/rest/api/storageservices/copy-blob) | This operation can be used when you want asynchronous scheduling for a copy operation. | [StartCopyFromUri](/dotnet/api/azure.storage.blobs.specialized.blobbaseclient.startcopyfromuri)<br>[StartCopyFromUriAsync](/dotnet/api/azure.storage.blobs.specialized.blobbaseclient.startcopyfromuriasync) | [Copy a blob with asynchronous scheduling using .NET](storage-blob-copy-async-dotnet.md) |
-- Copy a source blob to a destination blob with a different name. The destination blob can be an existing blob of the same blob type (block, append, or page), or can be a new blob created by the copy operation.-- Copy a source blob to a destination blob with the same name, effectively replacing the destination blob. Such a copy operation removes any uncommitted blocks and overwrites the destination blob's metadata.-- Copy a source file in the Azure File service to a destination blob. The destination blob can be an existing block blob, or can be a new block blob created by the copy operation. Copying from files to page blobs or append blobs isn't supported.-- Copy a snapshot over its base blob. By promoting a snapshot to the position of the base blob, you can restore an earlier version of a blob.-- Copy a snapshot to a destination blob with a different name. The resulting destination blob is a writeable blob and not a snapshot.
+For append blobs, you can use the [Append Block From URL](/rest/api/storageservices/append-block-from-url) operation to commit a new block of data to the end of an existing append blob. The following client library methods wrap this operation:
-The source blob for a copy operation may be one of the following types:
-- Block blob-- Append blob-- Page blob-- Blob snapshot-- Blob version
+- [AppendBlockFromUri](/dotnet/api/azure.storage.blobs.specialized.appendblobclient.appendblockfromuri)
+- [AppendBlockFromUriAsync](/dotnet/api/azure.storage.blobs.specialized.appendblobclient.appendblockfromuriasync)
-If the destination blob already exists, it must be of the same blob type as the source blob, and the existing destination blob will be overwritten. The destination blob can't be modified while a copy operation is in progress, and a destination blob can only have one outstanding copy operation.
+For page blobs, you can use the [Put Page From URL](/rest/api/storageservices/put-page-from-url) operation to write a range of pages to a page blob where the contents are read from a URL. The following client library methods wrap this operation:
-The entire source blob or file is always copied. Copying a range of bytes or set of blocks isn't supported. When a blob is copied, its system properties are copied to the destination blob with the same values.
+- [UploadPagesFromUri](/dotnet/api/azure.storage.blobs.specialized.pageblobclient.uploadpagesfromuri)
+- [UploadPagesFromUriAsync](/dotnet/api/azure.storage.blobs.specialized.pageblobclient.uploadpagesfromuriasync)
-## Copy a blob
+## Client library resources
-To copy a blob, call one of the following methods:
--- [StartCopyFromUri](/dotnet/api/azure.storage.blobs.specialized.blobbaseclient.startcopyfromuri)-- [StartCopyFromUriAsync](/dotnet/api/azure.storage.blobs.specialized.blobbaseclient.startcopyfromuriasync)-
-The `StartCopyFromUri` and `StartCopyFromUriAsync` methods return a [CopyFromUriOperation](/dotnet/api/azure.storage.blobs.models.copyfromurioperation) object containing information about the copy operation.
-
-The following code example gets a [BlobClient](/dotnet/api/azure.storage.blobs.blobclient) representing an existing blob and copies it to a new blob in a different container within the same storage account.
-
-```csharp
-public static async Task CopyBlobAsync(BlobServiceClient blobServiceClient)
-{
- // Instantiate BlobClient for the source blob and destination blob
- BlobClient sourceBlob = blobServiceClient
- .GetBlobContainerClient("source-container")
- .GetBlobClient("sample-blob.txt");
- BlobClient destinationBlob = blobServiceClient
- .GetBlobContainerClient("destination-container")
- .GetBlobClient("sample-blob.txt");
-
- // Start the copy operation and wait for it to complete
- CopyFromUriOperation copyOperation = await destinationBlob.StartCopyFromUriAsync(sourceBlob.Uri);
- await copyOperation.WaitForCompletionAsync();
-}
-```
-
-To check the status of a copy operation, you can call [UpdateStatusAsync](/dotnet/api/azure.storage.blobs.models.copyfromurioperation.updatestatusasync#azure-storage-blobs-models-copyfromurioperation-updatestatusasync(system-threading-cancellationtoken)) and parse the response to get the value for the `x-ms-copy-status` header.
-
-The following code example shows how to check the status of a given copy operation:
--
-## Abort a copy operation
-
-Aborting a copy operation results in a destination blob of zero length. However, the metadata for the destination blob will have the new values copied from the source blob or set explicitly during the copy operation. To keep the original metadata from before the copy, make a snapshot of the destination blob before calling one of the copy methods.
-
-To abort a pending copy operation, call one of the following operations:
-- [AbortCopyFromUri](/dotnet/api/azure.storage.blobs.specialized.blobbaseclient.abortcopyfromuri)-- [AbortCopyFromUriAsync](/dotnet/api/azure.storage.blobs.specialized.blobbaseclient.abortcopyfromuriasync)-
-The following code example shows how to abort a pending copy operation:
--
-## Resources
-
-To learn more about copying blobs using the Azure Blob Storage client library for .NET, see the following resources.
-
-### REST API operations
-
-The Azure SDK for .NET contains libraries that build on top of the Azure REST API, allowing you to interact with REST API operations through familiar .NET paradigms. The client library methods for copying blobs use the following REST API operations:
--- [Copy Blob](/rest/api/storageservices/copy-blob) (REST API)-- [Copy Blob From URL](/rest/api/storageservices/copy-blob-from-url) (REST API)-- [Abort Copy Blob](/rest/api/storageservices/abort-copy-blob) (REST API)-
+- [Client library reference documentation](/dotnet/api/azure.storage.blobs)
+- [Client library source code](https://github.com/Azure/azure-sdk-for-net/tree/master/sdk/storage/Azure.Storage.Blobs)
+- [Package (NuGet)](https://www.nuget.org/packages/Azure.Storage.Blobs)
storage Upgrade To Data Lake Storage Gen2 How To https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/upgrade-to-data-lake-storage-gen2-how-to.md
Previously updated : 09/23/2022 Last updated : 04/13/2023
To learn more about these capabilities and evaluate the impact of this upgrade o
Your account might be configured to use features that aren't yet supported in Data Lake Storage Gen2 enabled accounts. If your account is using a feature that isn't yet supported, the upgrade will not pass the validation step. Review the [Blob Storage feature support in Azure Storage accounts](storage-feature-support-in-storage-accounts.md) article to identify unsupported features. If you're using any of those unsupported features in your account, make sure to disable them before you begin the upgrade.
+ > [!NOTE]
+ > Blob soft delete is not yet supported by the upgrade process. Make sure to disable blob soft delete and then allow all soft-deleted blobs to expire before you upgrade the account.
+ 2. Ensure that the segments of each blob path are named. The migration process creates a directory for each path segment of a blob. Data Lake Storage Gen2 directories must have a name, so for the migration to succeed, each path segment in a virtual directory must have a name. The same requirement applies to segments that are named only with a space character. If any path segments are either unnamed (`//`) or named only with a space character (`_`), then before you proceed with the migration, you must copy those blobs to a new path that is compatible with these naming requirements.
To learn more about these capabilities and evaluate the impact of this upgrade o
> [!div class="mx-imgBorder"]
> ![Error json page](./media/upgrade-to-data-lake-storage-gen2-how-to/error-json.png)
- Open the downloaded file to determine why the account did not pass the validation step. If you have a Blob Storage feature that is fully supported, but which in Data Lake Storage Gen2 is supported only at the preview level or is not yet supported, validation might fail. To see how each Blob Storage feature is supported with Data Lake Storage Gen2, see [Blob Storage feature support in Azure Storage accounts](storage-feature-support-in-storage-accounts.md).
-
- The following JSON indicates that an incompatible feature is enabled on the account. In this case, you would disable the feature and then start the validation process again.
+ Open the downloaded file to determine why the account did not pass the validation step. The following JSON indicates that an incompatible feature is enabled on the account. In this case, you would disable the feature and then start the validation process again.
```json {
storage Versions Manage Dotnet https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/versions-manage-dotnet.md
The following code example shows how to list blob versions.
:::code language="csharp" source="~/azure-storage-snippets/blobs/howto/dotnet/dotnet-v12/CRUD.cs" id="Snippet_ListBlobVersions":::
-## See also
+## Copy a previous blob version over the base blob
+
+You can perform a copy operation to promote a version over its base blob, as long as the base blob is in an online tier (hot or cool). The version remains, but the base blob is overwritten with a copy that can be both read and written.
+
+The following code example shows how to copy a blob version over the base blob:
++
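A hedged sketch of one way to do this with the .NET client library, pointing a versioned client at the source and copying it over the base blob (the helper name and parameters are assumptions, not the article's snippet):

```csharp
using System.Threading.Tasks;
using Azure.Storage.Blobs;
using Azure.Storage.Blobs.Models;
using Azure.Storage.Blobs.Specialized;

public static class PromoteVersionExample
{
    // Illustrative sketch: copies an earlier blob version over its base blob.
    public static async Task PromoteVersionAsync(
        BlobContainerClient containerClient, string blobName, string versionId)
    {
        BlobClient baseBlob = containerClient.GetBlobClient(blobName);

        // WithVersion returns a client whose URI targets the specified version
        BlobBaseClient versionedBlob = baseBlob.WithVersion(versionId);

        // Copy the version's content over the base blob (the base blob must be in an online tier)
        CopyFromUriOperation copyOperation = await baseBlob.StartCopyFromUriAsync(versionedBlob.Uri);
        await copyOperation.WaitForCompletionAsync();
    }
}
```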
+## Resources
+
+To learn more about managing blob versions using the Azure Blob Storage client library for .NET, see the following resources.
++
+### See also
- [Blob versioning](versioning-overview.md)
- [Enable and manage blob versioning](versioning-enable.md)
storage Storage Explorer Troubleshooting https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/common/storage-explorer-troubleshooting.md
- Title: Azure Storage Explorer troubleshooting guide
-description: Overview of debugging techniques for Azure Storage Explorer
----- Previously updated : 07/28/2020---
-# Azure Storage Explorer troubleshooting guide
-
-Microsoft Azure Storage Explorer is a standalone app that makes it easy to work with Azure Storage data on Windows, macOS, and Linux. The app can connect to storage accounts hosted on Azure, national clouds, and Azure Stack.
-
-This guide summarizes solutions for issues that are commonly seen in Storage Explorer.
-
-## Azure RBAC permissions issues
-
-Azure role-based access control [(Azure RBAC)](../../role-based-access-control/overview.md) enables highly granular access management of Azure resources by combining sets of permissions into *roles*. Here are some strategies to get Azure RBAC working optimally in Storage Explorer.
-
-### How do I access my resources in Storage Explorer?
-
-If you're having problems accessing storage resources through Azure RBAC, you might not have been assigned the appropriate roles. The following sections describe the permissions Storage Explorer currently requires for access to your storage resources. Contact your Azure account admin if you're not sure you have the appropriate roles or permissions.
-
-#### "Read: List/Get Storage Account(s)" permissions issue
-
-You must have permission to list storage accounts. To get this permission, you must be assigned the *Reader* role.
-
-#### List storage account keys
-
-Storage Explorer can also use account keys to authenticate requests. You can get access to account keys through more powerful roles, such as the *Contributor* role.
-
-> [!NOTE]
-> Access keys grant unrestricted permissions to anyone who holds them. As a result, we don't recommend that you hand out these keys to account users. If you need to revoke access keys, you can regenerate them from the [Azure portal](https://portal.azure.com/).
-
-#### Data roles
-
-You must be assigned at least one role that grants access to read data from resources. For example, if you want to list or download blobs, you'll need at least the *Storage Blob Data Reader* role.
-
-### Why do I need a management layer role to see my resources in Storage Explorer?
-
-Azure Storage has two layers of access: *management* and *data*. Subscriptions and storage accounts are accessed through the management layer. Containers, blobs, and other data resources are accessed through the data layer. For example, if you want to get a list of your storage accounts from Azure, you send a request to the management endpoint. If you want a list of blob containers in an account, you send a request to the appropriate service endpoint.
-
-Azure roles can grant you permissions for management or data layer access. The Reader role, for example, grants read-only access to management layer resources.
-
-Strictly speaking, the Reader role provides no data layer permissions and isn't necessary for accessing the data layer.
-
-Storage Explorer makes it easy to access your resources by gathering the necessary information to connect to your Azure resources. For example, to display your blob containers, Storage Explorer sends a "list containers" request to the blob service endpoint. To get that endpoint, Storage Explorer searches the list of subscriptions and storage accounts you have access to. To find your subscriptions and storage accounts, Storage Explorer also needs access to the management layer.
-
-If you don't have a role that grants any management layer permissions, Storage Explorer can't get the information it needs to connect to the data layer.
-
-### What if I can't get the management layer permissions I need from my admin?
-
-If you want to access blob containers, Azure Data Lake Storage Gen2 containers or directories, or queues, you can attach to those resources by using your Azure credentials.
-
-1. Open the **Connect** dialog.
-1. Select the resource type you want to connect to.
-1. Select **Sign in using Azure Active Directory (Azure AD)** and select **Next**.
-1. Select the user account and tenant associated with the resource you're attaching to. Select **Next**.
-1. Enter the URL to the resource, and enter a unique display name for the connection. Select **Next** and then select **Connect**.
-
-For other resource types, we don't currently have an Azure RBAC-related solution. As a workaround, you can request a shared access signature URL and then attach to your resource:
-
-1. Open the **Connect** dialog.
-1. Select the resource type you want to connect to.
-1. Select **Shared access signature (SAS)** and select **Next**.
-1. Enter the shared access signature URL you received and enter a unique display name for the connection. Select **Next** and then select **Connect**.
-
-For more information on how to attach to resources, see [Attach to an individual resource](../../vs-azure-tools-storage-manage-with-storage-explorer.md#attach-to-an-individual-resource).
-
-### Recommended Azure built-in roles
-
-There are several Azure built-in roles that can provide the permissions needed to use Storage Explorer. Some of those roles are:
-
-- [Owner](../../role-based-access-control/built-in-roles.md#owner): Manage everything, including access to resources.
-- [Contributor](../../role-based-access-control/built-in-roles.md#contributor): Manage everything, excluding access to resources.
-- [Reader](../../role-based-access-control/built-in-roles.md#reader): Read and list resources.
-- [Storage Account Contributor](../../role-based-access-control/built-in-roles.md#storage-account-contributor): Full management of storage accounts.
-- [Storage Blob Data Owner](../../role-based-access-control/built-in-roles.md#storage-blob-data-owner): Full access to Azure Storage blob containers and data.
-- [Storage Blob Data Contributor](../../role-based-access-control/built-in-roles.md#storage-blob-data-contributor): Read, write, and delete Azure Storage containers and blobs.
-- [Storage Blob Data Reader](../../role-based-access-control/built-in-roles.md#storage-blob-data-reader): Read and list Azure Storage containers and blobs.
-> [!NOTE]
-> The Owner, Contributor, and Storage Account Contributor roles grant account key access.
-
-## SSL certificate issues
-
-This section discusses SSL certificate issues.
-
-### Understand SSL certificate issues
-
-Make sure you've read the [SSL certificates section](./storage-explorer-network.md#ssl-certificates) in the Storage Explorer networking documentation before you continue.
-
-### Use system proxy
-
-If you're only using features that support the **use system proxy** setting, try using that setting. To read more about the **system proxy** setting, see [Network connections in Storage Explorer](./storage-explorer-network.md#use-system-proxy-preview).
-
-### Import SSL certificates
-
-If you have a copy of the self-signed certificates, you can instruct Storage Explorer to trust them:
-
-1. Obtain a Base-64 encoded X.509 (.cer) copy of the certificate.
-1. Go to **Edit** > **SSL Certificates** > **Import Certificates**. Then use the file picker to find, select, and open the .cer file.
-
-This issue might also occur if there are multiple certificates (root and intermediate). To fix this error, all certificates must be imported.
-
-### Find SSL certificates
-
-If you don't have a copy of the self-signed certificates, talk to your IT admin for help.
-
-Follow these steps to find them:
-
-1. Install OpenSSL:
-
- - [Windows](https://slproweb.com/products/Win32OpenSSL.html): Any of the light versions should be sufficient.
- - Mac: Should be included with your operating system.
- - Linux: Should be included with your operating system.
-
-1. Run OpenSSL:
-
- - Windows: Open the installation directory, select **/bin/**, and then double-click **openssl.exe**.
- - Mac: Run `openssl` from a terminal.
- - Linux: Run `openssl` from a terminal.
-
-1. Run the command `openssl s_client -showcerts -connect <hostname>:443` for any of the Microsoft or Azure host names that your storage resources are behind. For more information, see this [list of host names that are frequently accessed by Storage Explorer](./storage-explorer-network.md).
-
-1. Look for self-signed certificates. If the subject `("s:")` and issuer `("i:")` are the same, the certificate is most likely self-signed.
-
-1. When you find the self-signed certificates, for each one, copy and paste everything from, and including, `--BEGIN CERTIFICATE--` to `--END CERTIFICATE--` into a new .cer file.
-
-1. Open Storage Explorer and go to **Edit** > **SSL Certificates** > **Import Certificates**. Then use the file picker to find, select, and open the .cer files you created.
-
-### Disable SSL certificate validation
-
-If you can't find any self-signed certificates by following these steps, contact us through the feedback tool. You can also open Storage Explorer from the command line with the `--ignore-certificate-errors` flag. When opened with this flag, Storage Explorer ignores certificate errors. *This flag isn't recommended.*
-
-## Sign-in issues
-
-This section discusses sign-in issues you might encounter.
-
-### Understand sign-in
-
-Make sure you've read the [Sign in to Storage Explorer](./storage-explorer-sign-in.md) documentation before you continue.
-
-### Frequently having to reenter credentials
-
-Having to reenter credentials is most likely the result of Conditional Access policies set by your Azure Active Directory (Azure AD) admin. When Storage Explorer asks you to reenter credentials from the account panel, you should see an **Error details** link. Select it to see why Storage Explorer is asking you to reenter credentials. Conditional Access policy errors that require reentering of credentials might look something like these:
-
-- The refresh token has expired.
-- You must use multifactor authentication to access.
-- Your admin made a configuration change.
-To reduce the frequency of having to reenter credentials because of errors like the preceding ones, you'll need to talk to your Azure AD admin.
-
-### Conditional access policies
-
-If you have conditional access policies that need to be satisfied for your account, make sure you're using the **Default Web Browser** value for the **Sign in with** setting. For information on that setting, see [Changing where sign-in happens](./storage-explorer-sign-in.md#changing-where-sign-in-happens).
-
-### Browser complains about HTTP redirect or insecure connection during sign-in
-
-When Storage Explorer performs sign-in in your web browser, a redirect to `localhost` is done at the end of the sign-in process. Browsers sometimes raise a warning or error that the redirect is being performed with HTTP instead of HTTPS. Some browsers might also try to force the redirect to be performed with HTTPS. If either of these issues happens, you have the following options, depending on your browser:
-
-- Ignore the warning.
-- Add an exception for `localhost`.
-- Disable force HTTPS, either globally or just for `localhost`.
-If you can't do any of those options, you can also [change where sign-in happens](./storage-explorer-sign-in.md#changing-where-sign-in-happens) to integrated sign-in to avoid using your browser altogether.
-
-### Unable to acquire token, tenant is filtered out
-
-Sometimes you may see an error message that says a token can't be acquired because a tenant is filtered out. This means you're trying to access a resource that's in a tenant you filtered out. To include the tenant, go to the **Account Panel**. Make sure the checkbox for the tenant specified in the error is selected. For more information on filtering tenants in Storage Explorer, see [Managing accounts](./storage-explorer-sign-in.md#managing-accounts).
-
-### Authentication library failed to start properly
-
-If on startup you see an error message that says Storage Explorer's authentication library failed to start properly, make sure your installation environment meets all [prerequisites](../../vs-azure-tools-storage-manage-with-storage-explorer.md#prerequisites). Not meeting prerequisites is the most likely cause of this error message.
-
-If you believe that your installation environment meets all prerequisites, [open an issue on GitHub](https://github.com/Microsoft/AzureStorageExplorer/issues/new). When you open your issue, make sure to include:
-- Your OS
-- What version of Storage Explorer you're trying to use
-- Whether you checked the prerequisites
-- [Authentication logs](#authentication-logs) from an unsuccessful launch of Storage Explorer. Verbose authentication logging is automatically enabled after this type of error occurs.
-### Blank window when you use integrated sign-in
-
-If you chose to use **Integrated Sign-in** and you're seeing a blank sign-in window, you'll likely need to switch to a different sign-in method. Blank sign-in dialog boxes most often occur when an Active Directory Federation Services server prompts Storage Explorer to perform a redirect that's unsupported by Electron.
-
-To change to a different sign-in method, change the **Sign in with** setting under **Settings** > **Application** > **Sign-in**. For information on the different types of sign-in methods, see [Changing where sign in happens](./storage-explorer-sign-in.md#changing-where-sign-in-happens).
-
-### Reauthentication loop or UPN change
-
-If you're in a reauthentication loop or have changed the UPN of one of your accounts, try these steps:
-
-1. Open Storage Explorer.
-1. Go to **Help** > **Reset**.
-1. Make sure at least **Authentication** is selected. Clear other items you don't want to reset.
-1. Select **Reset**.
-1. Restart Storage Explorer and try to sign in again.
-
-If you continue to have issues after you do a reset, try these steps:
-
-1. Open Storage Explorer.
-1. Remove all accounts and then close Storage Explorer.
-1. Delete the *.IdentityService* folder from your machine. On Windows, the folder is located at *C:\users\\<username\>\AppData\Local*. For Mac and Linux, you can find the folder at the root of your user directory.
-1. If you're running Mac or Linux, you also need to delete the Microsoft.Developer.IdentityService entry from your operating system's keystore. On the Mac, the keystore is the Gnome Keychain application. In Linux, the application is typically called *Keyring*, but the name might differ depending on your distribution.
-1. Restart Storage Explorer and try to sign in again.
-
-### macOS: Keychain errors or no sign-in window
-
-macOS Keychain can sometimes enter a state that causes issues for the Storage Explorer authentication library. To get Keychain out of this state:
-
-1. Close Storage Explorer.
-1. Open Keychain by selecting **Command+Spacebar**, enter **keychain**, and select **Enter**.
-1. Select the **login** keychain.
-1. Select the **padlock** to lock the keychain. After the process is finished, the **padlock** appears locked. It might take a few seconds, depending on what apps you have open.
-
- ![Screenshot that shows the padlock.](./media/storage-explorer-troubleshooting/unlockingkeychain.png)
-
-1. Open Storage Explorer.
-1. You're prompted with a message like "Service hub wants to access the Keychain." Enter your Mac admin account password and select **Always Allow**. Or select **Allow** if **Always Allow** isn't available.
-1. Try to sign in.
-
-### Default browser doesn't open
-
-If your default browser doesn't open when you try to sign in, try all of the following techniques:
-
-- Restart Storage Explorer.
-- Open your browser manually before you start to sign in.
-- Try using **Integrated Sign-In**. For instructions, see [Changing where sign-in happens](./storage-explorer-sign-in.md#changing-where-sign-in-happens).
-### Other sign-in issues
-
-If none of the preceding instructions apply to your sign-in issue or if they fail to resolve your sign-in issue, [open an issue on GitHub](https://github.com/Microsoft/AzureStorageExplorer/issues).
-
-### Missing subscriptions and broken tenants
-
-If you can't retrieve your subscriptions after you successfully sign in, try the following troubleshooting methods:
-
-- Verify that your account has access to the subscriptions you expect. You can verify your access by signing in to the portal for the Azure environment you're trying to use.
-- Make sure you've signed in through the correct Azure environment like Azure, Azure China 21Vianet, Azure Germany, Azure US Government, or Custom Environment.
-- If you're behind a proxy server, make sure you configured the Storage Explorer proxy correctly.
-- Try removing and adding back the account.
-- If there's a "More information" or "Error details" link, check which error messages are being reported for the tenants that are failing. If you aren't sure how to respond to the error messages, [open an issue in GitHub](https://github.com/Microsoft/AzureStorageExplorer/issues).
-## Problem interacting with your OS credential store during an AzCopy transfer
-
-If you see this message on Windows, most likely the Windows Credential Manager is full. To make room in the Windows Credential Manager:
-
-1. Close Storage Explorer.
-1. On the **Start** menu, search for **Credential Manager** and open it.
-1. Go to **Windows Credentials**.
-1. Under **Generic Credentials**, look for entries associated with programs you no longer use and delete them. You can also look for entries like `azcopy/aadtoken/<some number>` and delete those entries.
-
-If the message continues to appear after completing the above steps, or if you encounter this message on platforms other than Windows, you can [open an issue on GitHub](https://github.com/Microsoft/AzureStorageExplorer/issues).
-
-## Can't remove an attached storage account or resource
-
-If you can't remove an attached account or storage resource through the UI, you can manually delete all attached resources by deleting the following folders:
-
-- Windows: *%AppData%/StorageExplorer*
-- macOS: */Users/<your_name>/Library/Application Support/StorageExplorer*
-- Linux: *~/.config/StorageExplorer*
-Close Storage Explorer before you delete these folders.
-
-> [!NOTE]
-> If you've ever imported any SSL certificates, back up the contents of the *certs* directory. Later, you can use the backup to reimport your SSL certificates.
-
-## Proxy issues
-
-Storage Explorer supports connecting to Azure Storage resources via a proxy server. If you experience any issues when you connect to Azure via proxy, here are some suggestions.
-
-Storage Explorer only supports basic authentication with proxy servers. Other authentication methods, such as NTLM, aren't supported.
-
-> [!NOTE]
-> Storage Explorer doesn't support proxy autoconfig files for configuring proxy settings.
-
-### Verify Storage Explorer proxy settings
-
-The **Application** > **Proxy** > **Proxy configuration** setting determines which source Storage Explorer gets the proxy configuration from.
-
-If you select **Use environment variables**, make sure to set the `HTTPS_PROXY` or `HTTP_PROXY` environment variables. Environment variables are case sensitive, so be sure to set the correct variables. If these variables are undefined or invalid, Storage Explorer won't use a proxy. Restart Storage Explorer after you modify any environment variables.
-
-If you select **Use app proxy settings**, make sure the in-app proxy settings are correct.
-
-### Steps for diagnosing issues
-
-If you're still experiencing issues, try these troubleshooting methods:
-
-1. If you can connect to the internet without using your proxy, verify that Storage Explorer works without proxy settings enabled. If Storage Explorer connects successfully, there might be an issue with your proxy server. Work with your admin to identify the problems.
-1. Verify that other applications that use the proxy server work as expected.
-1. Verify that you can connect to the portal for the Azure environment you're trying to use.
-1. Verify that you can receive responses from your service endpoints. Enter one of your endpoint URLs into your browser. If you can connect, you should receive an `InvalidQueryParameterValue` or similar XML response.
-1. Check whether someone else using Storage Explorer with the same proxy server can connect. If they can, you might have to contact your proxy server admin.
-
-### Tools for diagnosing issues
-
-A networking tool, such as Fiddler, can help you diagnose problems.
-
-1. Configure your networking tool as a proxy server running on the local host. If you have to continue working behind an actual proxy, you might have to configure your networking tool to connect through the proxy.
-1. Check the port number used by your networking tool.
-1. Configure Storage Explorer proxy settings to use the local host and the networking tool's port number, such as "localhost:8888".
-
-When set correctly, your networking tool will log network requests made by Storage Explorer to management and service endpoints.
-
-If your networking tool doesn't appear to be logging Storage Explorer traffic, try testing your tool with a different application. For example, enter the endpoint URL for one of your storage resources (such as `https://contoso.blob.core.windows.net/`) in a web browser. You'll receive a response similar to this code sample.
-
- ![Code sample.](./media/storage-explorer-troubleshooting/4022502_en_2.png)
-
- The response suggests the resource exists, even though you can't access it.
-
-If your networking tool only shows traffic from other applications, you might need to adjust the proxy settings in Storage Explorer. Otherwise, you might need to adjust your tool's settings.
-
-### Contact proxy server admin
-
-If your proxy settings are correct, you might have to contact your proxy server admin to:
-
-- Make sure your proxy doesn't block traffic to Azure management or resource endpoints.
-- Verify the authentication protocol used by your proxy server. Storage Explorer only supports basic authentication protocols. Storage Explorer doesn't support NTLM proxies.
-## "Unable to Retrieve Children" error message
-
-If you're connected to Azure through a proxy, verify that your proxy settings are correct.
-
-If the owner of a subscription or account has granted you access to a resource, verify that you have read or list permissions for that resource.
-
-## Connection string doesn't have complete configuration settings
-
-If you receive this error message, it's possible that you don't have the necessary permissions to obtain the keys for your storage account. To confirm, go to the portal and locate your storage account. Right-click the node for your storage account and select **Open in Portal**. Then, go to the **Access Keys** pane. If you don't have permissions to view keys, you'll see a "You don't have access" message. To work around this issue, you can obtain either an account name and key or an account shared access signature and use it to attach the storage account.
-
-If you do see the account keys, file an issue in GitHub so that we can help you resolve the issue.
-
-## "Error occurred while adding new connection: TypeError: Cannot read property 'version' of undefined"
-
-If you receive this error message when you try to add a custom connection, the connection data that's stored in the local credential manager might be corrupted. To work around this issue, try deleting and adding back your corrupted local connections:
-
-1. Start Storage Explorer. From the menu, go to **Help** > **Toggle Developer Tools**.
-1. In the opened window, on the **Application** tab, go to **Local Storage** > **file://** on the left side.
-1. Depending on the type of connection you're having an issue with, look for its key. Then copy its value into a text editor. The value is an array of your custom connection names, such as:
-
- - Storage accounts
- - `StorageExplorer_CustomConnections_Accounts_v1`
- - Blob containers
- - `StorageExplorer_CustomConnections_Blobs_v1`
- - `StorageExplorer_CustomConnections_Blobs_v2`
- - File shares
- - `StorageExplorer_CustomConnections_Files_v1`
- - Queues
- - `StorageExplorer_CustomConnections_Queues_v1`
- - Tables
- - `StorageExplorer_CustomConnections_Tables_v1`
-
-1. After you save your current connection names, set the value in **Developer Tools** to `[]`.
-
-To preserve the connections that aren't corrupted, use the following steps to locate the corrupted connections. If you don't mind losing all existing connections, skip these steps and follow the platform-specific instructions to clear your connection data.
-
-1. From a text editor, add back each connection name to **Developer Tools**. Then check whether the connection is still working.
-1. If a connection is working correctly, it's not corrupted; you can safely leave it there. If a connection isn't working, remove its value from **Developer Tools**, and record it so that you can add it back later.
-1. Repeat until you've examined all your connections.
-
-After removing connection names, you must clear their corrupted data. Then you can add the connections back by using the standard connect steps in Storage Explorer.
-
-# [Windows](#tab/Windows)
-
-1. On the **Start** menu, search for **Credential Manager** and open it.
-1. Go to **Windows Credentials**.
-1. Under **Generic Credentials**, look for entries that have the `<connection_type_key>/<corrupted_connection_name>` key. An example is `StorageExplorer_CustomConnections_Accounts_v1/account1`.
-1. Delete and add back these connections.
-
-# [macOS](#tab/macOS)
-
-1. Open Spotlight by selecting **Command+Space** and search for **Keychain access**.
-1. Look for entries that have the `<connection_type_key>/<corrupted_connection_name>` key. An example is `StorageExplorer_CustomConnections_Accounts_v1/account1`.
-1. Delete and add back these connections.
-
-# [Ubuntu](#tab/linux-ubuntu)
-
-Local credential management varies depending on your system configuration. If your system doesn't have a tool for local credential management installed, you may install a third-party tool compatible with `libsecret` to manage your local credentials. For example, on systems using GNOME, you can install [Seahorse](https://wiki.gnome.org/Apps/Seahorse/).
-
-1. Open your local credential management tool. Find your saved credentials.
-1. Look for entries that have the `<connection_type_key>/<corrupted_connection_name>` key (for example `StorageExplorer_CustomConnections_Accounts_v1/account1`)
-1. Delete and add back these connections.
-
-# [Red Hat Enterprise Linux](#tab/linux-rhel)
-
-Local credential management varies depending on your system configuration. If your system doesn't have a tool for local credential management installed, you may install a third-party tool compatible with `libsecret` to manage your local credentials. For example, on systems using GNOME, you can install [Seahorse](https://wiki.gnome.org/Apps/Seahorse/).
-
-1. Open your local credential management tool. Find your saved credentials.
-1. Look for entries that have the `<connection_type_key>/<corrupted_connection_name>` key (for example `StorageExplorer_CustomConnections_Accounts_v1/account1`)
-1. Delete and add back these connections.
-
-# [SUSE Linux Enterprise Server](#tab/linux-sles)
-
-> [!NOTE]
-> Storage Explorer has not been tested for SLES. You may try using Storage Explorer on your system, but we cannot guarantee that Storage Explorer will work as expected.
-
-Local credential management varies depending on your system configuration. If your system doesn't have a tool for local credential management installed, you may install a third-party tool compatible with `libsecret` to manage your local credentials. For example, on systems using GNOME, you can install [Seahorse](https://wiki.gnome.org/Apps/Seahorse/).
-
-1. Open your local credential management tool. Find your saved credentials.
-1. Look for entries that have the `<connection_type_key>/<corrupted_connection_name>` key (for example `StorageExplorer_CustomConnections_Accounts_v1/account1`)
-1. Delete and add back these connections.
---
-If you still encounter this error after you run these steps, or if you want to share what you suspect has corrupted the connections, [open an issue](https://github.com/microsoft/AzureStorageExplorer/issues) on our GitHub page.
-
-## Issues with a shared access signature URL
-
-If you connect to a service through a shared access signature URL and experience an error:
-
-- Verify that the URL provides the necessary permissions to read or list resources.
-- Verify that the URL hasn't expired.
-- If the shared access signature URL is based on an access policy, verify that the access policy hasn't been revoked.
-If you accidentally attached by using an invalid shared access signature URL and now can't detach, follow these steps:
-
-1. When you're running Storage Explorer, select **F12** to open the **Developer Tools** window.
-1. On the **Application** tab, select **Local Storage** > **file://** on the left side.
-1. Find the key associated with the service type of the shared access signature URI. For example, if the bad shared access signature URI is for a blob container, look for the key named `StorageExplorer_AddStorageServiceSAS_v1_blob`.
-1. The value of the key should be a JSON array. Find the object associated with the bad URI, and delete it.
-1. Select **Ctrl+R** to reload Storage Explorer.
-
-## Storage Explorer dependencies
-
-# [Windows](#tab/Windows)
-
-Storage Explorer comes packaged with all dependencies it needs to run on Windows.
-
-# [macOS](#tab/macOS)
-
-Storage Explorer comes packaged with all dependencies it needs to run on macOS.
-
-# [Ubuntu](#tab/linux-ubuntu)
-
-### Snap
-
-Storage Explorer 1.10.0 and later is available as a snap from the Snap Store. The Storage Explorer snap installs all its dependencies automatically. It's updated when a new version of the snap is available. Installing the Storage Explorer snap is the recommended method of installation.
-
-Storage Explorer requires the use of a password manager, which you might need to connect manually before Storage Explorer will work correctly. You can connect Storage Explorer to your system's password manager by running the following command:
-
-```bash
-snap connect storage-explorer:password-manager-service :password-manager-service
-```
-
-### .tar.gz file
-
-You can also download the application as a *.tar.gz* file, but you'll have to install dependencies manually.
-
-Storage Explorer requires the [.NET 6 runtime](/dotnet/core/install/linux) to be installed on your system. The ASP.NET runtime is *not* required.
-
-> [!NOTE]
-> Older versions of Storage Explorer may require a different version of .NET or .NET Core. Refer to release notes or in-app error messages to help determine the required version.
-
-Many libraries needed by Storage Explorer come preinstalled with Canonical's standard installations of Ubuntu. Custom environments might be missing some of these libraries. If you have issues launching Storage Explorer, make sure the following packages are installed on your system:
-- iproute2
-- libasound2
-- libatm1
-- libgconf-2-4
-- libnspr4
-- libnss3
-- libpulse0
-- libsecret-1-0
-- libx11-xcb1
-- libxss1
-- libxtables11
-- libxtst6
-- xdg-utils
-# [Red Hat Enterprise Linux](#tab/linux-rhel)
-
-### Snap
-
-Storage Explorer 1.10.0 and later is available as a snap from the Snap Store. The Storage Explorer snap installs all its dependencies automatically. It's updated when a new version of the snap is available. Installing the Storage Explorer snap is the recommended method of installation.
-
-Storage Explorer requires the use of a password manager, which you might need to connect manually before Storage Explorer will work correctly. You can connect Storage Explorer to your system's password manager by running the following command:
-
-```bash
-snap connect storage-explorer:password-manager-service :password-manager-service
-```
-
-### .tar.gz file
-
-> [!NOTE]
-> Storage Explorer as provided in the *.tar.gz* download is supported for Ubuntu only. Storage Explorer might work on RHEL, but it is not officially supported.
-
-You can also download the application as a *.tar.gz* file, but you'll have to install dependencies manually.
-
-Storage Explorer requires the [.NET 6 runtime](/dotnet/core/install/linux) to be installed on your system. The ASP.NET runtime is *not* required.
-
-> [!NOTE]
-> Older versions of Storage Explorer may require a different version of .NET or .NET Core. Refer to release notes or in-app error messages to help determine the required version.
-
-Many libraries needed by Storage Explorer may be missing in RHEL environments. If you have issues launching Storage Explorer, make sure the following packages (or their RHEL equivalents) are installed on your system:
-- iproute2
-- libasound2
-- libatm1
-- libgconf-2-4
-- libnspr4
-- libnss3
-- libpulse0
-- libsecret-1-0
-- libx11-xcb1
-- libxss1
-- libxtables11
-- libxtst6
-- xdg-utils
-# [SUSE Linux Enterprise Server](#tab/linux-sles)
-
-> [!NOTE]
-> Storage Explorer has not been tested for SLES. You may try using Storage Explorer on your system, but we cannot guarantee that Storage Explorer will work as expected.
-
-### Snap
-
-Storage Explorer 1.10.0 and later is available as a snap from the Snap Store. The Storage Explorer snap installs all its dependencies automatically. It's updated when a new version of the snap is available. Installing the Storage Explorer snap is the recommended method of installation.
-
-Storage Explorer requires the use of a password manager, which you might need to connect manually before Storage Explorer will work correctly. You can connect Storage Explorer to your system's password manager by running the following command:
-
-```bash
-snap connect storage-explorer:password-manager-service :password-manager-service
-```
-
-### .tar.gz file
-
-You can also download the application as a *.tar.gz* file, but you'll have to install dependencies manually.
-
-Storage Explorer requires the [.NET 6 runtime](/dotnet/core/install/linux) to be installed on your system. The ASP.NET runtime is *not* required.
-
-> [!NOTE]
-> Older versions of Storage Explorer may require a different version of .NET or .NET Core. Refer to release notes or in-app error messages to help determine the required version.
-
-Many libraries needed by Storage Explorer may be missing in SLES environments. If you have issues launching Storage Explorer, make sure the following packages (or their SLES equivalents) are installed on your system:
-- iproute2
-- libasound2
-- libatm1
-- libgconf-2-4
-- libnspr4
-- libnss3
-- libpulse0
-- libsecret-1-0
-- libx11-xcb1
-- libxss1
-- libxtables11
-- libxtst6
-- xdg-utils
---
-### Patch Storage Explorer for newer versions of .NET Core
-
-For Storage Explorer 1.7.0 or earlier, you might have to patch the version of .NET Core used by Storage Explorer:
-
-1. Download version 1.5.43 of StreamJsonRpc [from NuGet](https://www.nuget.org/packages/StreamJsonRpc/1.5.43).
-1. Look for the **Download package** link on the right side of the page.
-1. After you download the package, change its file extension from .nupkg to .zip.
-1. Unzip the package.
-1. Open the *streamjsonrpc.1.5.43/lib/netstandard1.1/* folder.
-1. Copy *StreamJsonRpc.dll* to the following locations in the Storage Explorer folder:
-
- - *StorageExplorer/resources/app/ServiceHub/Services/Microsoft.Developer.IdentityService/*
- - *StorageExplorer/resources/app/ServiceHub/Hosts/ServiceHub.Host.Core.CLR.x64/*
-
-## Open In Explorer button in the Azure portal doesn't work
-
-If the **Open In Explorer** button in the Azure portal doesn't work, make sure you're using a compatible browser. The following browsers were tested for compatibility:
-
-- Microsoft Edge
-- Mozilla Firefox
-- Google Chrome
-- Microsoft Internet Explorer
-## Gather logs
-
-When you report an issue to GitHub, you might be asked to gather certain logs to help diagnose your issue.
-
-### Storage Explorer logs
-
-Storage Explorer logs various things to its own application logs. You can easily get to these logs by selecting **Help** > **Open Logs Directory**. By default, Storage Explorer logs at a low level of verbosity. To change the verbosity level, go to **Settings** (the **gear** symbol on the left) > **Application** > **Logging** > **Log Level**. You can then set the log level as needed. For troubleshooting, the `debug` log level is recommended.
-
-Logs are split into folders for each session of Storage Explorer that you run. For whatever log files you need to share, place them in a zip archive, with files from different sessions in different folders.
-
-### Authentication logs
-
-For issues related to sign-in or Storage Explorer's authentication library, you'll most likely need to gather authentication logs. Authentication logs are stored at:
-
-- Windows: *C:\Users\\<your username\>\AppData\Local\Temp\servicehub\logs*
-- macOS: *~/.ServiceHub/logs*
-- Linux: *~/.ServiceHub/logs*
-Generally, you can follow these steps to gather the logs:
-
-1. Go to **Settings** (the **gear** symbol on the left) > **Application** > **Sign-in**. Select **Verbose Authentication Logging**. If Storage Explorer fails to start because of an issue with its authentication library, this step will be done for you.
-1. Close Storage Explorer.
-1. Optional/recommended: Clear out existing logs from the *logs* folder. This step reduces the amount of information you have to send us.
-1. Open Storage Explorer and reproduce your issue.
-1. Close Storage Explorer.
-1. Zip the contents of the *logs* folder.
-
-### AzCopy logs
-
-If you're having trouble transferring data, you might need to get the AzCopy logs. AzCopy logs can be found easily via two different methods:
-
-- For failed transfers still in the Activity Log, select **Go to AzCopy Log File**.
-- For transfers that failed in the past, go to the AzCopy logs folder. This folder can be found at:
- - Windows: *C:\Users\\<your username\>\\.azcopy*
- - macOS: *~/.azcopy*
- - Linux: *~/.azcopy*
-
-### Network logs
-
-For some issues, you'll need to provide logs of the network calls made by Storage Explorer. On Windows, you can get network logs by using Fiddler.
-
-> [!NOTE]
-> Fiddler traces might contain passwords you entered or sent in your browser during the gathering of the trace. Make sure to read the instructions on how to sanitize a Fiddler trace. Don't upload Fiddler traces to GitHub. You'll be told where you can securely send your Fiddler trace.
-
-#### Part 1: Install and configure Fiddler
-
-1. Install Fiddler.
-1. Start Fiddler.
-1. Go to **Tools** > **Options**.
-1. Select the **HTTPS** tab.
-1. Make sure **Capture CONNECTs** and **Decrypt HTTPS traffic** are selected.
-1. Select **Actions**.
-1. Select **Trust Root Certificate** and then select **Yes** in the next dialog.
-1. Start Storage Explorer.
-1. Go to **Settings** (the **gear** symbol on the left) > **Application** > **Proxy**
-1. Change the proxy source dropdown to be **Use system proxy (preview)**.
-1. Restart Storage Explorer.
-1. You should start seeing network calls from a `storageexplorer:` process show up in Fiddler.
-
-#### Part 2: Reproduce the issue
-
-1. Close all apps other than Fiddler.
-1. Clear the Fiddler log by using the **X** in the top left, near the **View** menu.
-1. Optional/recommended: Let Fiddler sit for a few minutes. If network calls appear that aren't related to Storage Explorer, right-click them and select **Filter Now** > **Hide \<process name\>**.
-1. Start/restart Storage Explorer.
-1. Reproduce the issue.
-1. Select **File** > **Save** > **All Sessions**. Save it somewhere you won't forget.
-1. Close Fiddler and Storage Explorer.
-
-#### Part 3: Sanitize the Fiddler trace
-
-1. Double-click the Fiddler trace (.saz file).
-1. Select **Ctrl+F**.
-1. In the dialog that appears, make sure the following options are set: **Search** = **Requests and responses** and **Examine** = **Headers and bodies**.
-1. Search for any passwords you used while you collected the Fiddler trace and any entries that are highlighted. Right-click and select **Remove** > **Selected sessions**.
-1. If you definitely entered passwords into your browser while you collected the trace but don't find any matching entries with **Ctrl+F**, if you don't want to change your passwords, or if the passwords you used are also used for other accounts, skip sending us the .saz file.
-1. Save the trace again with a new name.
-1. Optional: Delete the original trace.
-
-## Next steps
-
-If none of these solutions work for you, you can:
-
-- [Create a support ticket](https://aka.ms/storageexplorer/servicerequest).
-- [Open an issue on GitHub](https://github.com/Microsoft/AzureStorageExplorer/issues) by selecting the **Report issue to GitHub** button in the lower-left corner.
-![Feedback](./media/storage-explorer-troubleshooting/feedback-button.PNG)
storage Storage Monitoring Diagnosing Troubleshooting https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/common/storage-monitoring-diagnosing-troubleshooting.md
- Title: Monitor and troubleshoot Azure Storage (classic logs & metrics)
-description: Use features like storage analytics, client-side logging, and other third-party tools to identify, diagnose, and troubleshoot Azure Storage-related issues.
--- Previously updated : 05/23/2022------
-# Monitor, diagnose, and troubleshoot Microsoft Azure Storage (classic)
-
-This guide shows you how to use features such as Azure Storage Analytics, client-side logging in the Azure Storage Client Library, and other third-party tools to identify, diagnose, and troubleshoot Azure Storage related issues.
-
-![Diagram that shows the flow of information between client applications and Azure storage services.][1]
-
-This guide is intended to be read primarily by developers of online services that use Azure Storage Services and IT Pros responsible for managing such online services. The goals of this guide are:
-
-- To help you maintain the health and performance of your Azure Storage accounts.
-- To provide you with the necessary processes and tools to help you decide whether an issue or problem in an application relates to Azure Storage.
-- To provide you with actionable guidance for resolving problems related to Azure Storage.
-> [!NOTE]
-> This article is based on using Storage Analytics metrics and logs as referred to as *Classic metrics and logs*. We recommend that you use Azure Storage metrics and logs in Azure Monitor instead of Storage Analytics logs. To learn more, see any of the following articles:
->
-> - [Monitoring Azure Blob Storage](../blobs/monitor-blob-storage.md)
-> - [Monitoring Azure Files](../files/storage-files-monitoring.md)
-> - [Monitoring Azure Queue Storage](../queues/monitor-queue-storage.md)
-> - [Monitoring Azure Table storage](../tables/monitor-table-storage.md)
-
-## Overview
-
-Diagnosing and troubleshooting issues in a distributed application hosted in a cloud environment can be more complex than in traditional environments. Applications can be deployed in a PaaS or IaaS infrastructure, on premises, on a mobile device, or in some combination of these environments. Typically, your application's network traffic may traverse public and private networks and your application may use multiple storage technologies such as Microsoft Azure Storage Tables, Blobs, Queues, or Files in addition to other data stores such as relational and document databases.
-
-To manage such applications successfully you should monitor them proactively and understand how to diagnose and troubleshoot all aspects of them and their dependent technologies. As a user of Azure Storage services, you should continuously monitor the Storage services your application uses for any unexpected changes in behavior (such as slower than usual response times), and use logging to collect more detailed data and to analyze a problem in depth. The diagnostics information you obtain from both monitoring and logging will help you to determine the root cause of the issue your application encountered. Then you can troubleshoot the issue and determine the appropriate steps you can take to remediate it. Azure Storage is a core Azure service, and forms an important part of the majority of solutions that customers deploy to the Azure infrastructure. Azure Storage includes capabilities to simplify monitoring, diagnosing, and troubleshooting storage issues in your cloud-based applications.
-
-### <a name="how-this-guide-is-organized"></a>How this guide is organized
-
-The section "[Monitoring your storage service]" describes how to monitor the health and performance of your Azure Storage services using Azure Storage Analytics Metrics (Storage Metrics).
-
-The section "[Diagnosing storage issues]" describes how to diagnose issues using Azure Storage Analytics Logging (Storage Logging). It also describes how to enable client-side logging using the facilities in one of the client libraries such as the Storage Client Library for .NET or the Azure SDK for Java.
-
-The section "[End-to-end tracing]" describes how you can correlate the information contained in various log files and metrics data.
-
-The section "[Troubleshooting guidance]" provides troubleshooting guidance for some of the common storage-related issues you might encounter.
-
-The "[Appendices]" include information about using other tools such as Wireshark and Netmon for analyzing network packet data, and Fiddler for analyzing HTTP/HTTPS messages.
-
-## <a name="monitoring-your-storage-service"></a>Monitoring your storage service
-
-If you're familiar with Windows performance monitoring, you can think of Storage Metrics as being an Azure Storage equivalent of Windows Performance Monitor counters. In Storage Metrics, you'll find a comprehensive set of metrics (counters in Windows Performance Monitor terminology) such as service availability, total number of requests to service, or percentage of successful requests to service. For a full list of the available metrics, see [Storage Analytics Metrics Table Schema](/rest/api/storageservices/Storage-Analytics-Metrics-Table-Schema). You can specify whether you want the storage service to collect and aggregate metrics every hour or every minute. For more information about how to enable metrics and monitor your storage accounts, see [Enabling storage metrics and viewing metrics data](../blobs/monitor-blob-storage.md).
-
-You can choose which hourly metrics you want to display in the [Azure portal](https://portal.azure.com) and configure rules that notify administrators by email whenever an hourly metric exceeds a particular threshold. For more information, see [Receive Alert Notifications](../../azure-monitor/alerts/alerts-overview.md).
-
-We recommend you review [Azure Monitor for Storage](./storage-insights-overview.md?toc=/azure/azure-monitor/toc.json) (preview). It's a feature of Azure Monitor that offers comprehensive monitoring of your Azure Storage accounts by delivering a unified view of your Azure Storage services performance, capacity, and availability. It doesn't require you to enable or configure anything, and you can immediately view these metrics from the pre-defined interactive charts and other visualizations included.
-
-The storage service collects metrics on a best-effort basis and may not record every storage operation.
-
-In the Azure portal, you can view metrics such as availability, total requests, and average latency numbers for a storage account. A notification rule has also been set up to alert an administrator if availability drops below a certain level. From viewing this data, one possible area for investigation is the table service success percentage being below 100% (for more information, see the section "[Metrics show low PercentSuccess or analytics log entries have operations with transaction status of ClientOtherErrors]").
-
-You should continuously monitor your Azure applications to ensure they're healthy and performing as expected by:
-
-- Establishing some baseline metrics for your application that will enable you to compare current data and identify any significant changes in the behavior of Azure storage and your application. The values of your baseline metrics will, in many cases, be application specific and you should establish them when you're performance testing your application.
-- Recording minute metrics and using them to monitor actively for unexpected errors and anomalies such as spikes in error counts or request rates.
-- Recording hourly metrics and using them to monitor average values such as average error counts and request rates.
-- Investigating potential issues using diagnostics tools as discussed later in the section "[Diagnosing storage issues]."
-The charts in the following image illustrate how the averaging that occurs for hourly metrics can hide spikes in activity. The hourly metrics appear to show a steady rate of requests, while the minute metrics reveal the fluctuations that are really taking place.
-
-![Charts that show how the averaging that occurs for hourly metrics can hide spikes in activity.][3]
-
-The remainder of this section describes what metrics you should monitor and why.
-
-### <a name="monitoring-service-health"></a>Monitoring service health
-
-You can use the [Azure portal](https://portal.azure.com) to view the health of the Storage service (and other Azure services) in all the Azure regions around the world. Monitoring enables you to see immediately if an issue outside of your control is affecting the Storage service in the region you use for your application.
-
-The [Azure portal](https://portal.azure.com) can also provide notifications of incidents that affect the various Azure services.
-Note: This information was previously available, along with historical data, on the [Azure Service Dashboard](https://azure.status.microsoft).
-For more information about Application Insights for Azure DevOps, see the appendix "[Appendix 5: Monitoring with Application Insights for Azure DevOps](#appendix-5)."
-
-### <a name="monitoring-capacity"></a>Monitoring capacity
-
-Storage Metrics only stores capacity metrics for the blob service because blobs typically account for the largest proportion of stored data (at the time of writing, it's not possible to use Storage Metrics to monitor the capacity of your tables and queues). You can find this data in the **$MetricsCapacityBlob** table if you have enabled monitoring for the Blob service. Storage Metrics records this data once per day, and you can use the value of the **RowKey** to determine whether the row contains an entity that relates to user data (value **data**) or analytics data (value **analytics**). Each stored entity contains information about the amount of storage used (**Capacity** measured in bytes) and the current number of containers (**ContainerCount**) and blobs (**ObjectCount**) in use in the storage account. For more information about the capacity metrics stored in the **$MetricsCapacityBlob** table, see [Storage Analytics Metrics Table Schema](/rest/api/storageservices/Storage-Analytics-Metrics-Table-Schema).
-
-> [!NOTE]
-> You should monitor these values for an early warning that you're approaching the capacity limits of your storage account. In the Azure portal, you can add alert rules to notify you if aggregate storage use exceeds or falls below thresholds that you specify.
->
->
-
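Because the classic capacity metrics live in an ordinary table, one way to read them is through the Table service API. The following is a rough sketch under that assumption, using the `Azure.Data.Tables` client; the method and variable names are illustrative:

```csharp
using System;
using System.Threading.Tasks;
using Azure.Data.Tables;

public static class CapacityMetricsExample
{
    // Illustrative sketch: prints the daily blob capacity rows recorded by Storage Analytics.
    public static async Task PrintBlobCapacityAsync(string connectionString)
    {
        var tableClient = new TableClient(connectionString, "$MetricsCapacityBlob");

        // RowKey "data" describes user data; RowKey "analytics" describes the analytics data itself
        await foreach (TableEntity entity in tableClient.QueryAsync<TableEntity>(filter: "RowKey eq 'data'"))
        {
            Console.WriteLine($"{entity.PartitionKey}: " +
                $"Capacity={entity.GetInt64("Capacity")} bytes, " +
                $"Containers={entity.GetInt64("ContainerCount")}, " +
                $"Blobs={entity.GetInt64("ObjectCount")}");
        }
    }
}
```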
-For help estimating the size of various storage objects such as blobs, see the blog post [Understanding Azure Storage Billing - Bandwidth, Transactions, and Capacity](/archive/blogs/patrick_butler_monterde/azure-storage-understanding-windows-azure-storage-billing-bandwidth-transactions-and-capacity).
-
-### <a name="monitoring-availability"></a>Monitoring availability
-
-You should monitor the availability of the storage services in your storage account by monitoring the value in the **Availability** column in the hourly or minute metrics tables: **$MetricsHourPrimaryTransactionsBlob**, **$MetricsHourPrimaryTransactionsTable**, **$MetricsHourPrimaryTransactionsQueue**, **$MetricsMinutePrimaryTransactionsBlob**, **$MetricsMinutePrimaryTransactionsTable**, **$MetricsMinutePrimaryTransactionsQueue**, **$MetricsCapacityBlob**. The **Availability** column contains a percentage value that indicates the availability of the service or the API operation represented by the row (the **RowKey** shows if the row contains metrics for the service as a whole or for a specific API operation).
-
-Any value less than 100% indicates that some storage requests are failing. You can see why they're failing by examining the other columns in the metrics data that show the numbers of requests with different error types such as **ServerTimeoutError**. You should expect to see **Availability** fall temporarily below 100% for reasons such as transient server timeouts while the service moves partitions to better load-balance requests; the retry logic in your client application should handle such intermittent conditions. The article [Storage Analytics Logged Operations and Status Messages](/rest/api/storageservices/Storage-Analytics-Logged-Operations-and-Status-Messages) lists the transaction types that Storage Metrics includes in its **Availability** calculation.
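As a rough sketch of that kind of check (again assuming you query the metrics tables through the Table service API with `Azure.Data.Tables`; names are illustrative), you could scan the minute metrics for rows where availability dropped below 100 percent:

```csharp
using System;
using System.Threading.Tasks;
using Azure.Data.Tables;

public static class AvailabilityMetricsExample
{
    // Illustrative sketch: lists minute-metric rows where blob availability fell below 100%.
    public static async Task ReportLowAvailabilityAsync(string connectionString)
    {
        var tableClient = new TableClient(connectionString, "$MetricsMinutePrimaryTransactionsBlob");

        await foreach (TableEntity entity in tableClient.QueryAsync<TableEntity>(filter: "Availability lt 100.0"))
        {
            // Inspect the error-count columns in the same row (for example ServerTimeoutError) to see why requests failed
            Console.WriteLine($"{entity.PartitionKey}/{entity.RowKey}: Availability={entity.GetDouble("Availability")}%");
        }
    }
}
```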
-
-In the [Azure portal](https://portal.azure.com), you can add alert rules to notify you if **Availability** for a service falls below a threshold that you specify.
-
-The "[Troubleshooting guidance]" section of this guide describes some common storage service issues related to availability.
-
-### <a name="monitoring-performance"></a>Monitoring performance
-
-To monitor the performance of the storage services, you can use the following metrics from the hourly and minute metrics tables.
-
-- The values in the **AverageE2ELatency** and **AverageServerLatency** columns show the average time the storage service or API operation type is taking to process requests. **AverageE2ELatency** is a measure of end-to-end latency that includes the time taken to read the request and send the response in addition to the time taken to process the request (therefore includes network latency once the request reaches the storage service); **AverageServerLatency** is a measure of just the processing time and therefore excludes any network latency related to communicating with the client. See the section "[Metrics show high AverageE2ELatency and low AverageServerLatency]" later in this guide for a discussion of why there might be a significant difference between these two values.
-- The values in the **TotalIngress** and **TotalEgress** columns show the total amount of data, in bytes, coming in to and going out of your storage service or through a specific API operation type.
-- The values in the **TotalRequests** column show the total number of requests that the storage service or API operation is receiving. **TotalRequests** is the total number of requests that the storage service receives.
-Typically, you'll monitor for unexpected changes in any of these values as an indicator that you have an issue that requires investigation.
-
-In the [Azure portal](https://portal.azure.com), you can add alert rules to notify you if any of the performance metrics for this service fall below or exceed a threshold that you specify.
-
-The "[Troubleshooting guidance]" section of this guide describes some common storage service issues related to performance.
-
-## <a name="diagnosing-storage-issues"></a>Diagnosing storage issues
-
-There are a number of ways that you might become aware of a problem or issue in your application, including:
-
-- A major failure that causes the application to crash or to stop working.
-- Significant changes from baseline values in the metrics you're monitoring as described in the previous section "[Monitoring your storage service]."
-- Reports from users of your application that some particular operation didn't complete as expected or that some feature isn't working.
-- Errors generated within your application that appear in log files or through some other notification method.
-
-Typically, issues related to Azure storage services fall into one of four broad categories:
-
-- Your application has a performance issue, either reported by your users, or revealed by changes in the performance metrics.
-- There's a problem with the Azure Storage infrastructure in one or more regions.
-- Your application is encountering an error, either reported by your users, or revealed by an increase in one of the error count metrics you monitor.
-- During development and test, you may be using the local storage emulator; you may encounter some issues that relate specifically to usage of the storage emulator.
-
-The following sections outline the steps you should follow to diagnose and troubleshoot issues in each of these four categories. The section "[Troubleshooting guidance]" later in this guide provides more detail for some common issues you may encounter.
-
-### <a name="service-health-issues"></a>Service health issues
-
-Service health issues are typically outside of your control. The [Azure portal](https://portal.azure.com) provides information about any ongoing issues with Azure services including storage services. If you opted for Read-Access Geo-Redundant Storage when you created your storage account, then if your data becomes unavailable in the primary location, your application can switch temporarily to the read-only copy in the secondary location. To read from the secondary, your application must be able to switch between using the primary and secondary storage locations, and be able to work in a reduced functionality mode with read-only data. The Azure Storage Client libraries allow you to define a retry policy that can read from secondary storage in case a read from primary storage fails. Your application also needs to be aware that the data in the secondary location is eventually consistent. For more information, see the blog post [Azure Storage Redundancy Options and Read Access Geo Redundant Storage](https://blogs.msdn.microsoft.com/windowsazurestorage/2013/12/11/windows-azure-storage-redundancy-options-and-read-access-geo-redundant-storage/).
-
-### <a name="performance-issues"></a>Performance issues
-
-The performance of an application can be subjective, especially from a user perspective. Therefore, it's important to have baseline metrics available to help you identify where there might be a performance issue. Many factors might affect the performance of an Azure storage service from the client application perspective. These factors might operate in the storage service, in the client, or in the network infrastructure; therefore it's important to have a strategy for identifying the origin of the performance issue.
-
-After you've identified the likely location of the cause of the performance issue from the metrics, you can then use the log files to find detailed information to diagnose and troubleshoot the problem further.
-
-The section "[Troubleshooting guidance]" later in this guide provides more information about some common performance-related issues you may encounter.
-
-### <a name="diagnosing-errors"></a>Diagnosing errors
-
-Users of your application may notify you of errors reported by the client application. Storage Metrics also records counts of different error types from your storage services such as **NetworkError**, **ClientTimeoutError**, or **AuthorizationError**. While Storage Metrics only records counts of different error types, you can obtain more detail about individual requests by examining server-side, client-side, and network logs. Typically, the HTTP status code returned by the storage service will give an indication of why the request failed.
-
-> [!NOTE]
-> Remember that you should expect to see some intermittent errors: for example, errors due to transient network conditions, or application errors.
->
->
-
-The following resources are useful for understanding storage-related status and error codes:
-
-- [Common REST API Error Codes](/rest/api/storageservices/Common-REST-API-Error-Codes)
-- [Blob Service Error Codes](/rest/api/storageservices/Blob-Service-Error-Codes)
-- [Queue Service Error Codes](/rest/api/storageservices/Queue-Service-Error-Codes)
-- [Table Service Error Codes](/rest/api/storageservices/Table-Service-Error-Codes)
-- [File Service Error Codes](/rest/api/storageservices/File-Service-Error-Codes)
-
-### <a name="storage-emulator-issues"></a>Storage emulator issues
-
-The Azure SDK includes a storage emulator you can run on a development workstation. This emulator simulates most of the behavior of the Azure storage services and is useful during development and test, enabling you to run applications that use Azure storage services without the need for an Azure subscription and an Azure storage account.
-
-The "[Troubleshooting guidance]" section of this guide describes some common issues encountered using the storage emulator.
-
-### <a name="storage-logging-tools"></a>Storage logging tools
-
-Storage Logging provides server-side logging of storage requests in your Azure storage account. For more information about how to enable server-side logging and access the log data, see [Enabling Storage Logging and Accessing Log Data](./storage-analytics-logging.md).
-
-The Storage Client Library for .NET enables you to collect client-side log data that relates to storage operations performed by your application. For more information, see [Client-side Logging with the .NET Storage Client Library](/rest/api/storageservices/Client-side-Logging-with-the-.NET-Storage-Client-Library).
-
-> [!NOTE]
-> In some circumstances (such as SAS authorization failures), a user may report an error for which you can find no request data in the server-side Storage logs. You can use the logging capabilities of the Storage Client Library to investigate if the cause of the issue is on the client or use network monitoring tools to investigate the network.
->
->
-
-### <a name="using-network-logging-tools"></a>Using network logging tools
-
-You can capture the traffic between the client and server to provide detailed information about the data the client and server are exchanging and the underlying network conditions. Useful network logging tools include:
-
-- [Fiddler](https://www.telerik.com/fiddler) is a free web debugging proxy that enables you to examine the headers and payload data of HTTP and HTTPS request and response messages. For more information, see [Appendix 1: Using Fiddler to capture HTTP and HTTPS traffic](#appendix-1).
-- [Microsoft Network Monitor (Netmon)](https://download.cnet.com/s/network-monitor/) and [Wireshark](https://www.wireshark.org/) are free network protocol analyzers that enable you to view detailed packet information for a wide range of network protocols. For more information about Wireshark, see "[Appendix 2: Using Wireshark to capture network traffic](#appendix-2)".
-- If you want to perform a basic connectivity test to check that your client machine can connect to the Azure storage service over the network, you cannot do this using the standard **ping** tool on the client. However, you can use the [**tcping** tool](https://www.elifulkerson.com/projects/tcping.php) to check connectivity.
-
-In many cases, the log data from Storage Logging and the Storage Client Library will be sufficient to diagnose an issue, but in some scenarios, you may need the more detailed information that these network logging tools can provide. For example, using Fiddler to view HTTP and HTTPS messages enables you to view header and payload data sent to and from the storage services, which would enable you to examine how a client application retries storage operations. Protocol analyzers such as Wireshark operate at the packet level enabling you to view TCP data, which would enable you to troubleshoot lost packets and connectivity issues.
-
-## <a name="end-to-end-tracing"></a>End-to-end tracing
-
-End-to-end tracing using a variety of log files is a useful technique for investigating potential issues. You can use the date/time information from your metrics data as an indication of where to start looking in the log files for the detailed information that will help you troubleshoot the issue.
-
-### <a name="correlating-log-data"></a>Correlating log data
-
-When viewing logs from client applications, network traces, and server-side storage logging, it's critical to be able to correlate requests across the different log files. The log files include a number of different fields that are useful as correlation identifiers. The client request ID is the most useful field to use to correlate entries in the different logs. However, sometimes it can be useful to use either the server request ID or timestamps. The following sections provide more details about these options.
-
-### <a name="client-request-id"></a>Client request ID
-
-The Storage Client Library automatically generates a unique client request ID for every request.
-
-- In the client-side log that the Storage Client Library creates, the client request ID appears in the **Client Request ID** field of every log entry relating to the request.
-- In a network trace such as one captured by Fiddler, the client request ID is visible in request messages as the **x-ms-client-request-id** HTTP header value.
-- In the server-side Storage Logging log, the client request ID appears in the Client request ID column.
-
-> [!NOTE]
-> It's possible for multiple requests to share the same client request ID because the client can assign this value (although the Storage Client Library assigns a
-> new value automatically). When the client retries, all attempts share the same client request ID. In the case of a batch sent from the client, the batch has a single client request ID.
->
->
-
-### <a name="server-request-id"></a>Server request ID
-
-The storage service automatically generates server request IDs.
-
-- In the server-side Storage Logging log, the server request ID appears in the **Request ID header** column.
-- In a network trace such as one captured by Fiddler, the server request ID appears in response messages as the **x-ms-request-id** HTTP header value.
-- In the client-side log that the Storage Client Library creates, the server request ID appears in the **Operation Text** column for the log entry showing details of the server response.
-
-> [!NOTE]
-> The storage service always assigns a unique server request ID to every request it receives, so every retry attempt from the client and every operation included in a batch has a unique server request ID.
->
->
-
-The code sample below demonstrates how to use a custom client request ID.
--
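-A minimal sketch of the technique (the container and blob names are placeholders, and the namespaces are from the classic WindowsAzure.Storage package) sets **OperationContext.ClientRequestID** and passes the context into the storage call:
-
-```csharp
-using System.IO;
-using Microsoft.WindowsAzure.Storage;
-using Microsoft.WindowsAzure.Storage.Blob;
-
-// Supply your own correlation value instead of the library-generated one.
-OperationContext context = new OperationContext { ClientRequestID = "my-custom-client-request-id" };
-
-CloudBlockBlob blob = container.GetBlockBlobReference("report.txt"); // placeholder container and blob name
-
-using (MemoryStream stream = new MemoryStream())
-{
-    // Passing the context makes the x-ms-client-request-id header carry your value,
-    // so the same ID appears in client-side logs, network traces, and Storage Logging.
-    blob.DownloadToStream(stream, accessCondition: null, options: null, operationContext: context);
-}
-```
-
-You can then search the client-side log, a Fiddler trace, and the server-side Storage Logging log for that single value to line up the three views of the same request.
-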
-### <a name="timestamps"></a>Timestamps
-
-You can also use timestamps to locate related log entries, but be careful of any clock skew between the client and server that may exist. Search plus or minus 15 minutes for matching server-side entries based on the timestamp on the client. Remember that the blob metadata for the blobs containing metrics indicates the time range for the metrics stored in the blob. This time range is useful if you have many metrics blobs for the same minute or hour.
-
-## <a name="troubleshooting-guidance"></a>Troubleshooting guidance
-
-This section will help you with the diagnosis and troubleshooting of some of the common issues your application may encounter when using the Azure storage services. Use the list below to locate the information relevant to your specific issue.
-
-**Troubleshooting Decision Tree**
--
-Does your issue relate to the performance of one of the storage services?
-
-- [Metrics show high AverageE2ELatency and low AverageServerLatency]
-- [Metrics show low AverageE2ELatency and low AverageServerLatency but the client is experiencing high latency]
-- [Metrics show high AverageServerLatency]
-- [You're experiencing unexpected delays in message delivery on a queue]
-
-Does your issue relate to the availability of one of the storage services?
-
-- [Metrics show an increase in PercentThrottlingError]
-- [Metrics show an increase in PercentTimeoutError]
-- [Metrics show an increase in PercentNetworkError]
-
- Is your client application receiving an HTTP 4XX (such as 404) response from a storage service?
-
-- [The client is receiving HTTP 403 (Forbidden) messages]
-- [The client is receiving HTTP 404 (Not found) messages]
-- [The client is receiving HTTP 409 (Conflict) messages]
-
-[Metrics show low PercentSuccess or analytics log entries have operations with transaction status of ClientOtherErrors]
-
-[Capacity metrics show an unexpected increase in storage capacity usage]
-
-[Your issue arises from using the storage emulator for development or test]
-
-[You're encountering problems installing the Azure SDK for .NET]
-
-[You have a different issue with a storage service]
-
-### <a name="metrics-show-high-AverageE2ELatency-and-low-AverageServerLatency"></a>Metrics show high AverageE2ELatency and low AverageServerLatency
-
-The illustration below from the [Azure portal](https://portal.azure.com) monitoring tool shows an example where the **AverageE2ELatency** is significantly higher than the **AverageServerLatency**.
-
-![Illustration from the Azure portal that shows an example where the AverageE2ELatency is significantly higher than the AverageServerLatency.][4]
-
-The storage service only calculates the metric **AverageE2ELatency** for successful requests and, unlike **AverageServerLatency**, includes the time the client takes to send the data and receive acknowledgment from the storage service. Therefore, a difference between **AverageE2ELatency** and **AverageServerLatency** could be either due to the client application being slow to respond, or due to conditions on the network.
-
-> [!NOTE]
-> You can also view **E2ELatency** and **ServerLatency** for individual storage operations in the Storage Logging log data.
->
->
-
-#### Investigating client performance issues
-
-Possible reasons for the client responding slowly include having a limited number of available connections or threads, or being low on resources such as CPU, memory or network bandwidth. You may be able to resolve the issue by modifying the client code to be more efficient (for example by using asynchronous calls to the storage service), or by using a larger Virtual Machine (with more cores and more memory).
-
-For the table and queue services, the Nagle algorithm can also cause high **AverageE2ELatency** as compared to **AverageServerLatency**: for more information, see the post [Nagle's Algorithm is Not Friendly towards Small Requests](/archive/blogs/windowsazurestorage/nagles-algorithm-is-not-friendly-towards-small-requests). You can disable the Nagle algorithm in code by using the **ServicePointManager** class in the **System.Net** namespace. You should do this before you make any calls to the table or queue services in your application since this doesn't affect connections that are already open. The following example comes from the **Application_Start** method in a worker role.
--
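-A minimal sketch of that pattern (the connection string is a placeholder, and the namespace is from the classic WindowsAzure.Storage package) disables Nagle for the table and queue endpoints before any connections are opened:
-
-```csharp
-using System.Net;
-using Microsoft.WindowsAzure.Storage;
-
-protected void Application_Start()
-{
-    // Placeholder connection string for your storage account.
-    CloudStorageAccount account = CloudStorageAccount.Parse("<your-connection-string>");
-
-    // Turn off the Nagle algorithm for the table and queue endpoints
-    // before the application opens any connections to those services.
-    ServicePointManager.FindServicePoint(account.TableEndpoint).UseNagleAlgorithm = false;
-    ServicePointManager.FindServicePoint(account.QueueEndpoint).UseNagleAlgorithm = false;
-}
-```
-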
-You should check the client-side logs to see how many requests your client application is submitting, and check for general .NET related performance bottlenecks in your client such as CPU, .NET garbage collection, network utilization, or memory. As a starting point for troubleshooting .NET client applications, see [Debugging, Tracing, and Profiling](/dotnet/framework/debug-trace-profile/).
-
-#### Investigating network latency issues
-
-Typically, high end-to-end latency caused by the network is due to transient conditions. You can investigate both transient and persistent network issues such as dropped packets by using tools such as Wireshark.
-
-For more information about using Wireshark to troubleshoot network issues, see "[Appendix 2: Using Wireshark to capture network traffic]."
-
-### <a name="metrics-show-low-AverageE2ELatency-and-low-AverageServerLatency"></a>Metrics show low AverageE2ELatency and low AverageServerLatency but the client is experiencing high latency
-
-In this scenario, the most likely cause is a delay in the storage requests reaching the storage service. You should investigate why requests from the client are not making it through to the blob service.
-
-One possible reason for the client delaying sending requests is that there are a limited number of available connections or threads.
-
-Also check whether the client is performing multiple retries, and investigate the reason if it is. To determine whether the client is performing multiple retries, you can:
-
-- Examine the Storage Analytics logs. If multiple retries are happening, you'll see multiple operations with the same client request ID but with different server request IDs.
-- Examine the client logs. Verbose logging will indicate that a retry has occurred.
-- Debug your code, and check the properties of the **OperationContext** object associated with the request. If the operation has retried, the **RequestResults** property will include multiple unique server request IDs. You can also check the start and end times for each request. For more information, see the code sample in the section [Server request ID].
-
-If there are no issues in the client, you should investigate potential network issues such as packet loss. You can use tools such as Wireshark to investigate network issues.
-
-For more information about using Wireshark to troubleshoot network issues, see "[Appendix 2: Using Wireshark to capture network traffic]."
-
-### <a name="metrics-show-high-AverageServerLatency"></a>Metrics show high AverageServerLatency
-
-In the case of high **AverageServerLatency** for blob download requests, you should use the Storage Logging logs to see if there are repeated requests for the same blob (or set of blobs). For blob upload requests, you should investigate what block size the client is using (for example, blocks less than 64 K in size can result in overheads unless the reads are also in less than 64 K chunks), and if multiple clients are uploading blocks to the same blob in parallel. You should also check the per-minute metrics for spikes in the number of requests that result in exceeding the per second scalability targets: also see "[Metrics show an increase in PercentTimeoutError]."
-
-If you're seeing high **AverageServerLatency** for blob download requests when there are repeated requests for the same blob or set of blobs, then you should consider caching these blobs using Azure Cache or the Azure Content Delivery Network (CDN). For upload requests, you can improve the throughput by using a larger block size. For queries to tables, it's also possible to implement client-side caching on clients that perform the same query operations and where the data doesn't change frequently.
-
-High **AverageServerLatency** values can also be a symptom of poorly designed tables or queries that result in scan operations or that follow the append/prepend anti-pattern. For more information, see "[Metrics show an increase in PercentThrottlingError]".
-
-> [!NOTE]
-> You can find a comprehensive performance checklist here: [Microsoft Azure Storage Performance and Scalability Checklist](../blobs/storage-performance-checklist.md).
->
->
-
-### <a name="you-are-experiencing-unexpected-delays-in-message-delivery"></a>You're experiencing unexpected delays in message delivery on a queue
-
-If you're experiencing a delay between the time an application adds a message to a queue and the time it becomes available to read from the queue, then you should take the following steps to diagnose the issue:
-
-- Verify the application is successfully adding the messages to the queue. Check that the application isn't retrying the **AddMessage** method several times before succeeding. The Storage Client Library logs will show any repeated retries of storage operations.
-- Verify there's no clock skew between the worker role that adds the message to the queue and the worker role that reads the message from the queue that makes it appear as if there's a delay in processing.
-- Check if the worker role that reads the messages from the queue is failing. If a queue client calls the **GetMessage** method but fails to respond with an acknowledgment, the message will remain invisible on the queue until the **invisibilityTimeout** period expires. At this point, the message becomes available for processing again.
-- Check if the queue length is growing over time. This can occur if you do not have sufficient workers available to process all of the messages that other workers are placing on the queue. Also check the metrics to see if delete requests are failing and the dequeue count on messages, which might indicate repeated failed attempts to delete the message (a short sketch for checking the dequeue count follows this list).
-- Examine the Storage Logging logs for any queue operations that have higher than expected **E2ELatency** and **ServerLatency** values over a longer period of time than usual.
-
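-As a quick way to inspect the dequeue count mentioned in the list above, a minimal sketch (the queue reference is a placeholder, and the namespaces are from the classic WindowsAzure.Storage package) is:
-
-```csharp
-using System;
-using Microsoft.WindowsAzure.Storage.Queue;
-
-// Retrieve a message with a short visibility timeout and check how many times
-// it has already been dequeued without being deleted.
-CloudQueueMessage message = queue.GetMessage(TimeSpan.FromSeconds(30)); // 'queue' is a placeholder CloudQueue
-if (message != null)
-{
-    Console.WriteLine($"Message {message.Id} has been dequeued {message.DequeueCount} time(s).");
-}
-```
-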
-### <a name="metrics-show-an-increase-in-PercentThrottlingError"></a>Metrics show an increase in PercentThrottlingError
-
-Throttling errors occur when you exceed the scalability targets of a storage service. The storage service throttles to ensure that no single client or tenant can use the service at the expense of others. For more information, see [Scalability and performance targets for standard storage accounts](scalability-targets-standard-account.md) for details on scalability targets for storage accounts and performance targets for partitions within storage accounts.
-
-If the **PercentThrottlingError** metric shows an increase in the percentage of requests that are failing with a throttling error, you need to investigate one of two scenarios:
-
-- [Transient increase in PercentThrottlingError]
-- [Permanent increase in PercentThrottlingError error]
-
-An increase in **PercentThrottlingError** often occurs at the same time as an increase in the number of storage requests, or when you're initially load testing your application. This may also manifest itself in the client as "503 Server Busy" or "500 Operation Timeout" HTTP status messages from storage operations.
-
-#### <a name="transient-increase-in-PercentThrottlingError"></a>Transient increase in PercentThrottlingError
-
-If you're seeing spikes in the value of **PercentThrottlingError** that coincide with periods of high activity for the application, you should implement an exponential (not linear) back-off strategy for retries in your client. Back-off retries reduce the immediate load on the partition and help your application to smooth out spikes in traffic. For more information about how to implement retry policies using the Storage Client Library, see the [Microsoft.Azure.Storage.RetryPolicies namespace](/dotnet/api/microsoft.azure.storage.retrypolicies).
-
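-As an illustration, here is a minimal sketch (the connection string is a placeholder; the namespaces are from the classic WindowsAzure.Storage package, and equivalent types exist under **Microsoft.Azure.Storage.RetryPolicies**) that applies an exponential back-off policy to every blob operation made through a client object:
-
-```csharp
-using System;
-using Microsoft.WindowsAzure.Storage;
-using Microsoft.WindowsAzure.Storage.Blob;
-using Microsoft.WindowsAzure.Storage.RetryPolicies;
-
-CloudStorageAccount account = CloudStorageAccount.Parse("<your-connection-string>"); // placeholder
-CloudBlobClient blobClient = account.CreateCloudBlobClient();
-
-// Retry throttled or failed requests up to 5 times, backing off exponentially
-// from a 4-second base delay rather than retrying at a fixed interval.
-blobClient.DefaultRequestOptions.RetryPolicy = new ExponentialRetry(TimeSpan.FromSeconds(4), 5);
-```
-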
-> [!NOTE]
-> You may also see spikes in the value of **PercentThrottlingError** that do not coincide with periods of high activity for the application: the most likely cause here is the storage service moving partitions to improve load balancing.
->
->
-
-#### <a name="permanent-increase-in-PercentThrottlingError"></a>Permanent increase in PercentThrottlingError error
-
-If you're seeing a consistently high value for **PercentThrottlingError** following a permanent increase in your transaction volumes, or when you're performing your initial load tests on your application, then you need to evaluate how your application is using storage partitions and whether it's approaching the scalability targets for a storage account. For example, if you're seeing throttling errors on a queue (which counts as a single partition), then you should consider using additional queues to spread the transactions across multiple partitions. If you're seeing throttling errors on a table, you need to consider using a different partitioning scheme to spread your transactions across multiple partitions by using a wider range of partition key values. One common cause of this issue is the prepend/append anti-pattern where you select the date as the partition key and then all data on a particular day is written to one partition: under load, this can result in a write bottleneck. Either consider a different partitioning design or evaluate whether using blob storage might be a better solution. Also check whether throttling is occurring as a result of spikes in your traffic and investigate ways of smoothing your pattern of requests.
-
-If you distribute your transactions across multiple partitions, you must still be aware of the scalability limits set for the storage account. For example, if you used ten queues each processing the maximum of 2,000 1KB messages per second, you'll be at the overall limit of 20,000 messages per second for the storage account. If you need to process more than 20,000 entities per second, you should consider using multiple storage accounts. You should also bear in mind that the size of your requests and entities has an impact on when the storage service throttles your clients: if you have larger requests and entities, you may be throttled sooner.
-
-Inefficient query design can also cause you to hit the scalability limits for table partitions. For example, a query with a filter that only selects one percent of the entities in a partition but that scans all the entities in a partition will need to access each entity. Every entity read will count towards the total number of transactions in that partition; therefore, you can easily reach the scalability targets.
-
-> [!NOTE]
-> Your performance testing should reveal any inefficient query designs in your application.
->
->
-
-### <a name="metrics-show-an-increase-in-PercentTimeoutError"></a>Metrics show an increase in PercentTimeoutError
-
-Your metrics show an increase in **PercentTimeoutError** for one of your storage services. At the same time, the client receives a high volume of "500 Operation Timeout" HTTP status messages from storage operations.
-
-> [!NOTE]
-> You may see timeout errors temporarily as the storage service load balances requests by moving a partition to a new server.
->
->
-
-The **PercentTimeoutError** metric is an aggregation of the following metrics: **ClientTimeoutError**, **AnonymousClientTimeoutError**, **SASClientTimeoutError**, **ServerTimeoutError**, **AnonymousServerTimeoutError**, and **SASServerTimeoutError**.
-
-The server timeouts are caused by an error on the server. The client timeouts happen because an operation on the server has exceeded the timeout specified by the client; for example, a client using the Storage Client Library can set a timeout for an operation by using the **ServerTimeout** property of the **QueueRequestOptions** class.
-
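-For example, a minimal sketch (the queue reference is a placeholder, and the namespaces are from the classic WindowsAzure.Storage package) that sets a 10-second server timeout for a single queue operation:
-
-```csharp
-using System;
-using Microsoft.WindowsAzure.Storage.Queue;
-
-// Ask the service to give up on this operation if it takes longer than 10 seconds.
-QueueRequestOptions options = new QueueRequestOptions
-{
-    ServerTimeout = TimeSpan.FromSeconds(10)
-};
-
-CloudQueueMessage message = queue.GetMessage(null, options, null); // 'queue' is a placeholder CloudQueue
-```
-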
-Server timeouts indicate a problem with the storage service that requires further investigation. You can use metrics to see if you're hitting the scalability limits for the service and to identify any spikes in traffic that might be causing this problem. If the problem is intermittent, it may be due to load-balancing activity in the service. If the problem is persistent and isn't caused by your application hitting the scalability limits of the service, you should raise a support issue. For client timeouts, you must decide if the timeout is set to an appropriate value in the client and either change the timeout value set in the client or investigate how you can improve the performance of the operations in the storage service, for example by optimizing your table queries or reducing the size of your messages.
-
-### <a name="metrics-show-an-increase-in-PercentNetworkError"></a>Metrics show an increase in PercentNetworkError
-
-Your metrics show an increase in **PercentNetworkError** for one of your storage services. The **PercentNetworkError** metric is an aggregation of the following metrics: **NetworkError**, **AnonymousNetworkError**, and **SASNetworkError**. These occur when the storage service detects a network error when the client makes a storage request.
-
-The most common cause of this error is a client disconnecting before a timeout expires in the storage service. Investigate the code in your client to understand why and when the client disconnects from the storage service. You can also use Wireshark or tcping to investigate network connectivity issues from the client. These tools are described in the [Appendices].
-
-### <a name="the-client-is-receiving-403-messages"></a>The client is receiving HTTP 403 (Forbidden) messages
-
-If your client application is throwing HTTP 403 (Forbidden) errors, a likely cause is that the client is using an expired Shared Access Signature (SAS) when it sends a storage request (although other possible causes include clock skew, invalid keys, and empty headers). If an expired SAS key is the cause, you'll not see any entries in the server-side Storage Logging log data. The following table shows a sample from the client-side log generated by the Storage Client Library that illustrates this issue occurring:
-
-| Source | Verbosity | Verbosity level | Client request ID | Operation text |
-| | | | | |
-| Microsoft.Azure.Storage |Information |3 |85d077ab-… |Starting operation with location Primary per location mode PrimaryOnly. |
-| Microsoft.Azure.Storage |Information |3 |85d077ab -… |Starting synchronous request to <https://developer.mozilla.org/en-US/docs/Web/API/XMLHttpRequest/Synchronous_and_Asynchronous_Requests#Synchronous_request> |
-| Microsoft.Azure.Storage |Information |3 |85d077ab -… |Waiting for response. |
-| Microsoft.Azure.Storage |Warning |2 |85d077ab -… |Exception thrown while waiting for response: The remote server returned an error: (403) Forbidden. |
-| Microsoft.Azure.Storage |Information |3 |85d077ab -… |Response received. Status code = 403, Request ID = 9d67c64a-64ed-4b0d-9515-3b14bbcdc63d, Content-MD5 = , ETag = . |
-| Microsoft.Azure.Storage |Warning |2 |85d077ab -… |Exception thrown during the operation: The remote server returned an error: (403) Forbidden.. |
-| Microsoft.Azure.Storage |Information |3 |85d077ab -… |Checking if the operation should be retried. Retry count = 0, HTTP status code = 403, Exception = The remote server returned an error: (403) Forbidden.. |
-| Microsoft.Azure.Storage |Information |3 |85d077ab -… |The next location has been set to Primary, based on the location mode. |
-| Microsoft.Azure.Storage |Error |1 |85d077ab -… |Retry policy did not allow for a retry. Failing with The remote server returned an error: (403) Forbidden. |
-
-In this scenario, you should investigate why the SAS token is expiring before the client sends the token to the server:
-
-- Typically, you should not set a start time when you create a SAS for a client to use immediately. If there are small clock differences between the host generating the SAS using the current time and the storage service, then it's possible for the storage service to receive a SAS that isn't yet valid.
-- Do not set a very short expiry time on a SAS. Again, small clock differences between the host generating the SAS and the storage service can lead to a SAS apparently expiring earlier than anticipated.
-- Does the version parameter in the SAS key (for example **sv=2015-04-05**) match the version of the Storage Client Library you're using? We recommend that you always use the latest version of the [Storage Client Library](https://www.nuget.org/packages/WindowsAzure.Storage/).
-- If you regenerate your storage access keys, any existing SAS tokens may be invalidated. This issue may arise if you generate SAS tokens with a long expiry time for client applications to cache.
-
-If you're using the Storage Client Library to generate SAS tokens, then it's easy to build a valid token. However, if you're using the Storage REST API and constructing the SAS tokens by hand, see [Delegating Access with a Shared Access Signature](/rest/api/storageservices/delegate-access-with-shared-access-signature).
-
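-As an illustration, a minimal sketch (the blob reference is a placeholder, and the namespaces are from the classic WindowsAzure.Storage package) that builds a short-lived, read-only SAS without setting a start time:
-
-```csharp
-using System;
-using Microsoft.WindowsAzure.Storage.Blob;
-
-// No SharedAccessStartTime is set, which avoids "not yet valid" failures caused by
-// small clock differences between this host and the storage service.
-SharedAccessBlobPolicy policy = new SharedAccessBlobPolicy
-{
-    Permissions = SharedAccessBlobPermissions.Read,
-    SharedAccessExpiryTime = DateTimeOffset.UtcNow.AddHours(1)
-};
-
-string sasToken = blob.GetSharedAccessSignature(policy); // 'blob' is a placeholder CloudBlockBlob
-Uri sasUri = new Uri(blob.Uri + sasToken);
-```
-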
-### <a name="the-client-is-receiving-404-messages"></a>The client is receiving HTTP 404 (Not found) messages
-
-If the client application receives an HTTP 404 (Not found) message from the server, this implies that the object the client was attempting to use (such as an entity, table, blob, container, or queue) doesn't exist in the storage service. There are a number of possible reasons for this, such as:
-
-- [The client or another process previously deleted the object]
-- [A Shared Access Signature (SAS) authorization issue]
-- [Client-side JavaScript code doesn't have permission to access the object]
-- [Network failure]
-
-#### <a name="client-previously-deleted-the-object"></a>The client or another process previously deleted the object
-
-In scenarios where the client is attempting to read, update, or delete data in a storage service it's usually easy to identify in the server-side logs a previous operation that deleted the object in question from the storage service. Often, the log data shows that another user or process deleted the object. In the server-side Storage Logging log, the operation-type and requested-object-key columns show when a client deleted an object.
-
-In the scenario where a client is attempting to insert an object, it may not be immediately obvious why this results in an HTTP 404 (Not found) response given that the client is creating a new object. However, if the client is creating a blob it must be able to find the blob container, if the client is creating a message it must be able to find a queue, and if the client is adding a row it must be able to find the table.
-
-You can use the client-side log from the Storage Client Library to gain a more detailed understanding of when the client sends specific requests to the storage service.
-
-The following client-side log generated by the Storage Client library illustrates the problem when the client cannot find the container for the blob it's creating. This log includes details of the following storage operations:
-
-| Request ID | Operation |
-| | |
-| 07b26a5d-... |**DeleteIfExists** method to delete the blob container. Note that this operation includes a **HEAD** request to check for the existence of the container. |
-| e2d06d78… |**CreateIfNotExists** method to create the blob container. Note that this operation includes a **HEAD** request that checks for the existence of the container. The **HEAD** returns a 404 message but continues. |
-| de8b1c3c-... |**UploadFromStream** method to create the blob. The **PUT** request fails with a 404 message |
-
-Log entries:
-
-| Request ID | Operation Text |
-| | |
-| 07b26a5d-... |Starting synchronous request to `https://domemaildist.blob.core.windows.net/azuremmblobcontainer`. |
-| 07b26a5d-... |StringToSign = HEAD............x-ms-client-request-id:07b26a5d-....x-ms-date:Tue, 03 Jun 2014 10:33:11 GMT.x-ms-version:2014-02-14./domemaildist/azuremmblobcontainer.restype:container. |
-| 07b26a5d-... |Waiting for response. |
-| 07b26a5d-... |Response received. Status code = 200, Request ID = eeead849-...Content-MD5 = , ETag = "0x8D14D2DC63D059B". |
-| 07b26a5d-... |Response headers were processed successfully, proceeding with the rest of the operation. |
-| 07b26a5d-... |Downloading response body. |
-| 07b26a5d-... |Operation completed successfully. |
-| 07b26a5d-... |Starting synchronous request to `https://domemaildist.blob.core.windows.net/azuremmblobcontainer`. |
-| 07b26a5d-... |StringToSign = DELETE............x-ms-client-request-id:07b26a5d-....x-ms-date:Tue, 03 Jun 2014 10:33:12 GMT.x-ms-version:2014-02-14./domemaildist/azuremmblobcontainer.restype:container. |
-| 07b26a5d-... |Waiting for response. |
-| 07b26a5d-... |Response received. Status code = 202, Request ID = 6ab2a4cf-..., Content-MD5 = , ETag = . |
-| 07b26a5d-... |Response headers were processed successfully, proceeding with the rest of the operation. |
-| 07b26a5d-... |Downloading response body. |
-| 07b26a5d-... |Operation completed successfully. |
-| e2d06d78-... |Starting asynchronous request to `https://domemaildist.blob.core.windows.net/azuremmblobcontainer`. |
-| e2d06d78-... |StringToSign = HEAD............x-ms-client-request-id:e2d06d78-....x-ms-date:Tue, 03 Jun 2014 10:33:12 GMT.x-ms-version:2014-02-14./domemaildist/azuremmblobcontainer.restype:container. |
-| e2d06d78-... |Waiting for response. |
-| de8b1c3c-... |Starting synchronous request to `https://domemaildist.blob.core.windows.net/azuremmblobcontainer/blobCreated.txt`. |
-| de8b1c3c-... |StringToSign = PUT...64.qCmF+TQLPhq/YYK50mP9ZQ==........x-ms-blob-type:BlockBlob.x-ms-client-request-id:de8b1c3c-....x-ms-date:Tue, 03 Jun 2014 10:33:12 GMT.x-ms-version:2014-02-14./domemaildist/azuremmblobcontainer/blobCreated.txt. |
-| de8b1c3c-... |Preparing to write request data. |
-| e2d06d78-... |Exception thrown while waiting for response: The remote server returned an error: (404) Not Found.. |
-| e2d06d78-... |Response received. Status code = 404, Request ID = 353ae3bc-..., Content-MD5 = , ETag = . |
-| e2d06d78-... |Response headers were processed successfully, proceeding with the rest of the operation. |
-| e2d06d78-... |Downloading response body. |
-| e2d06d78-... |Operation completed successfully. |
-| e2d06d78-... |Starting asynchronous request to `https://domemaildist.blob.core.windows.net/azuremmblobcontainer`. |
-| e2d06d78-... |StringToSign = PUT...0.........x-ms-client-request-id:e2d06d78-....x-ms-date:Tue, 03 Jun 2014 10:33:12 GMT.x-ms-version:2014-02-14./domemaildist/azuremmblobcontainer.restype:container. |
-| e2d06d78-... |Waiting for response. |
-| de8b1c3c-... |Writing request data. |
-| de8b1c3c-... |Waiting for response. |
-| e2d06d78-... |Exception thrown while waiting for response: The remote server returned an error: (409) Conflict.. |
-| e2d06d78-... |Response received. Status code = 409, Request ID = c27da20e-..., Content-MD5 = , ETag = . |
-| e2d06d78-... |Downloading error response body. |
-| de8b1c3c-... |Exception thrown while waiting for response: The remote server returned an error: (404) Not Found.. |
-| de8b1c3c-... |Response received. Status code = 404, Request ID = 0eaeab3e-..., Content-MD5 = , ETag = . |
-| de8b1c3c-... |Exception thrown during the operation: The remote server returned an error: (404) Not Found.. |
-| de8b1c3c-... |Retry policy did not allow for a retry. Failing with The remote server returned an error: (404) Not Found.. |
-| e2d06d78-... |Retry policy did not allow for a retry. Failing with The remote server returned an error: (409) Conflict.. |
-
-In this example, the log shows that the client is interleaving requests from the **CreateIfNotExists** method (request ID e2d06d78…) with the requests from the **UploadFromStream** method (de8b1c3c-...). This interleaving happens because the client application is invoking these methods asynchronously. Modify the asynchronous code in the client to ensure that it creates the container before attempting to upload any data to a blob in that container. Ideally, you should create all your containers in advance.
-
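-A minimal sketch of the corrected ordering (the container reference and file name are placeholders, and the namespaces are from the classic WindowsAzure.Storage package) waits for the container to exist before starting the upload:
-
-```csharp
-using System.IO;
-using Microsoft.WindowsAzure.Storage.Blob;
-
-// Make sure the container exists before any blob in it is written,
-// instead of letting the two asynchronous calls interleave.
-await container.CreateIfNotExistsAsync(); // 'container' is a placeholder CloudBlobContainer
-
-CloudBlockBlob blob = container.GetBlockBlobReference("blobCreated.txt");
-using (FileStream stream = File.OpenRead("blobCreated.txt")) // placeholder local file
-{
-    await blob.UploadFromStreamAsync(stream);
-}
-```
-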
-#### <a name="SAS-authorization-issue"></a>A Shared Access Signature (SAS) authorization issue
-
-If the client application attempts to use a SAS key that doesn't include the necessary permissions for the operation, the storage service returns an HTTP 404 (Not found) message to the client. At the same time, you'll also see a non-zero value for **SASAuthorizationError** in the metrics.
-
-The following table shows a sample server-side log message from the Storage Logging log file:
-
-| Name | Value |
-| | |
-| Request start time | 2014-05-30T06:17:48.4473697Z |
-| Operation type | GetBlobProperties |
-| Request status | SASAuthorizationError |
-| HTTP status code | 404 |
-| Authentication type | Sas |
-| Service type | Blob |
-| Request URL | `https://domemaildist.blob.core.windows.net/azureimblobcontainer/blobCreatedViaSAS.txt` |
-| &nbsp; | ?sv=2014-02-14&sr=c&si=mypolicy&sig=XXXXX&api-version=2014-02-14 |
-| Request ID header | a1f348d5-8032-4912-93ef-b393e5252a3b |
-| Client request ID | 2d064953-8436-4ee0-aa0c-65cb874f7929 |
-
-Investigate why your client application is attempting to perform an operation for which it has not been granted permissions.
-
-#### <a name="JavaScript-code-does-not-have-permission"></a>Client-side JavaScript code doesn't have permission to access the object
-
-If you're using a JavaScript client and the storage service is returning HTTP 404 messages, check for the following JavaScript errors in the browser:
-
-```
-SEC7120: Origin http://localhost:56309 not found in Access-Control-Allow-Origin header.
-SCRIPT7002: XMLHttpRequest: Network Error 0x80070005, Access is denied.
-```
-
-> [!NOTE]
-> You can use the F12 Developer Tools in Internet Explorer to trace the messages exchanged between the browser and the storage service when you're troubleshooting client-side JavaScript issues.
->
->
-
-These errors occur because the web browser implements the [same origin policy](https://www.w3.org/Security/wiki/Same_Origin_Policy) security restriction that prevents a web page from calling an API in a different domain from the domain the page comes from.
-
-To work around the JavaScript issue, you can configure Cross Origin Resource Sharing (CORS) for the storage service the client is accessing. For more information, see [Cross-Origin Resource Sharing (CORS) Support for Azure Storage Services](/rest/api/storageservices/Cross-Origin-Resource-Sharing--CORS--Support-for-the-Azure-Storage-Services).
-
-The following code sample shows how to configure your blob service to allow JavaScript running in the Contoso domain to access a blob in your blob storage service:
--
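-A minimal sketch of such a CORS configuration (the connection string and the Contoso origin URL are placeholders, and the namespaces are from the classic WindowsAzure.Storage package) is:
-
-```csharp
-using System.Collections.Generic;
-using Microsoft.WindowsAzure.Storage;
-using Microsoft.WindowsAzure.Storage.Blob;
-using Microsoft.WindowsAzure.Storage.Shared.Protocol;
-
-CloudStorageAccount account = CloudStorageAccount.Parse("<your-connection-string>"); // placeholder
-CloudBlobClient blobClient = account.CreateCloudBlobClient();
-
-// Read the current service properties, add a CORS rule for the Contoso origin, and save them back.
-ServiceProperties properties = blobClient.GetServiceProperties();
-properties.Cors.CorsRules.Add(new CorsRule
-{
-    AllowedOrigins = new List<string> { "https://www.contoso.com" },
-    AllowedMethods = CorsHttpMethods.Get,
-    AllowedHeaders = new List<string> { "*" },
-    ExposedHeaders = new List<string> { "*" },
-    MaxAgeInSeconds = 3600
-});
-blobClient.SetServiceProperties(properties);
-```
-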
-#### <a name="network-failure"></a>Network Failure
-
-In some circumstances, lost network packets can lead to the storage service returning HTTP 404 messages to the client. For example, when your client application is deleting an entity from the table service you see the client throw a storage exception reporting an "HTTP 404 (Not Found)" status message from the table service. When you investigate the table in the table storage service, you see that the service did delete the entity as requested.
-
-The exception details in the client include the request ID (7e84f12d…) assigned by the table service for the request: you can use this information to locate the request details in the server-side storage logs by searching in the **request-id-header** column in the log file. You could also use the metrics to identify when failures such as this occur and then search the log files based on the time the metrics recorded this error. This log entry shows that the delete failed with an "HTTP (404) Client Other Error" status message. The same log entry also includes the request ID generated by the client in the **client-request-id** column (813ea74f…).
-
-The server-side log also includes another entry with the same **client-request-id** value (813ea74f…) for a successful delete operation for the same entity, and from the same client. This successful delete operation took place very shortly before the failed delete request.
-
-The most likely cause of this scenario is that the client sent a delete request for the entity to the table service, which succeeded, but didn't receive an acknowledgment from the server (perhaps due to a temporary network issue). The client then automatically retried the operation (using the same **client-request-id**), and this retry failed because the entity had already been deleted.
-
-If this problem occurs frequently, you should investigate why the client is failing to receive acknowledgments from the table service. If the problem is intermittent, you should trap the "HTTP (404) Not Found" error and log it in the client, but allow the client to continue.
-
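-A minimal sketch of that approach (the table and entity references are placeholders, and the namespaces are from the classic WindowsAzure.Storage package) traps the 404 raised by the retried delete and lets the client continue:
-
-```csharp
-using System;
-using Microsoft.WindowsAzure.Storage;
-using Microsoft.WindowsAzure.Storage.Table;
-
-try
-{
-    // 'table' and 'entity' are placeholders for the table reference and the entity to delete.
-    table.Execute(TableOperation.Delete(entity));
-}
-catch (StorageException ex) when (ex.RequestInformation.HttpStatusCode == 404)
-{
-    // The entity is already gone, for example after a retry that followed a lost acknowledgment.
-    // Log the condition and continue rather than failing the operation.
-    Console.WriteLine($"Delete retried on a missing entity: {ex.RequestInformation.ServiceRequestID}");
-}
-```
-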
-### <a name="the-client-is-receiving-409-messages"></a>The client is receiving HTTP 409 (Conflict) messages
-
-The following table shows an extract from the server-side log for two client operations: **DeleteIfExists** followed immediately by **CreateIfNotExists** using the same blob container name. Each client operation results in two requests sent to the server, first a **GetContainerProperties** request to check if the container exists, followed by the **DeleteContainer** or **CreateContainer** request.
-
-| Timestamp | Operation | Result | Container name | Client request ID |
-| | | | | |
-| 05:10:13.7167225 |GetContainerProperties |200 |mmcont |c9f52c89-… |
-| 05:10:13.8167325 |DeleteContainer |202 |mmcont |c9f52c89-… |
-| 05:10:13.8987407 |GetContainerProperties |404 |mmcont |bc881924-… |
-| 05:10:14.2147723 |CreateContainer |409 |mmcont |bc881924-… |
-
-The code in the client application deletes and then immediately recreates a blob container using the same name: the **CreateIfNotExists** method (Client request ID bc881924-…) eventually fails with the HTTP 409 (Conflict) error. When a client deletes blob containers, tables, or queues there's a brief period before the name becomes available again.
-
-The client application should use unique container names whenever it creates new containers if the delete/recreate pattern is common.
-
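-For example, a minimal sketch (the blob client reference is a placeholder, and the naming scheme is illustrative only) that appends a unique suffix so a freshly deleted name is never reused immediately:
-
-```csharp
-using System;
-using Microsoft.WindowsAzure.Storage.Blob;
-
-// Container names must be lowercase; a GUID suffix keeps each create distinct,
-// so it can't collide with a container of the same name that is still being deleted.
-string containerName = "mmcont-" + Guid.NewGuid().ToString("N");
-CloudBlobContainer container = blobClient.GetContainerReference(containerName); // 'blobClient' is a placeholder
-container.CreateIfNotExists();
-```
-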
-### <a name="metrics-show-low-percent-success"></a>Metrics show low PercentSuccess or analytics log entries have operations with transaction status of ClientOtherErrors
-
-The **PercentSuccess** metric captures the percent of operations that were successful based on their HTTP Status Code. Operations with status codes of 2XX count as successful, whereas operations with status codes in 3XX, 4XX and 5XX ranges are counted as unsuccessful and lower the **PercentSuccess** metric value. In the server-side storage log files, these operations are recorded with a transaction status of **ClientOtherErrors**.
-
-It's important to note that these operations have completed successfully and therefore do not affect other metrics such as availability. Some examples of operations that execute successfully but that can result in unsuccessful HTTP status codes include:
-
-- **ResourceNotFound** (Not Found 404), for example from a GET request to a blob that doesn't exist.
-- **ResourceAlreadyExists** (Conflict 409), for example from a **CreateIfNotExists** operation where the resource already exists.
-- **ConditionNotMet** (Not Modified 304), for example from a conditional operation such as when a client sends an **ETag** value and an HTTP **If-None-Match** header to request an image only if it has been updated since the last operation.
-
-You can find a list of common REST API error codes that the storage services return on the page [Common REST API Error Codes](/rest/api/storageservices/Common-REST-API-Error-Codes).
-
-### <a name="capacity-metrics-show-an-unexpected-increase"></a>Capacity metrics show an unexpected increase in storage capacity usage
-
-If you see sudden, unexpected changes in capacity usage in your storage account, you can investigate the reasons by first looking at your availability metrics. For example, an increase in the number of failed delete requests might lead to an increase in the amount of blob storage you're using, because the application-specific cleanup operations you expected to free up space aren't working as expected (for example, because the SAS tokens they use have expired).
-
-### <a name="your-issue-arises-from-using-the-storage-emulator"></a>Your issue arises from using the storage emulator for development or test
-
-You typically use the storage emulator during development and test to avoid the requirement for an Azure storage account. The common issues that can occur when you're using the storage emulator are:
--- [Feature "X" isn't working in the storage emulator]-- [Error "The value for one of the HTTP headers isn't in the correct format" when using the storage emulator]-- [Running the storage emulator requires administrative privileges]-
-#### <a name="feature-X-is-not-working"></a>Feature "X" isn't working in the storage emulator
-
-The storage emulator doesn't support all of the features of the Azure storage services such as the file service. For more information, see [Use the Azure Storage Emulator for Development and Testing](storage-use-emulator.md).
-
-For those features that the storage emulator doesn't support, use the Azure storage service in the cloud.
-
-#### <a name="error-HTTP-header-not-correct-format"></a>Error "The value for one of the HTTP headers isn't in the correct format" when using the storage emulator
-
-You're testing your application that uses the Storage Client Library against the local storage emulator and method calls such as **CreateIfNotExists** fail with the error message "The value for one of the HTTP headers isn't in the correct format." This indicates that the version of the storage emulator you're using doesn't support the version of the storage client library you're using. The Storage Client Library adds the header **x-ms-version** to all the requests it makes. If the storage emulator doesn't recognize the value in the **x-ms-version** header, it rejects the request.
-
-You can use the Storage Client Library logs to see the value of the **x-ms-version** header it's sending. You can also see the value of the **x-ms-version** header if you use Fiddler to trace the requests from your client application.
-
-This scenario typically occurs if you install and use the latest version of the Storage Client Library without updating the storage emulator. You should either install the latest version of the storage emulator, or use cloud storage instead of the emulator for development and test.
-
-#### <a name="storage-emulator-requires-administrative-privileges"></a>Running the storage emulator requires administrative privileges
-
-You're prompted for administrator credentials when you run the storage emulator. This only occurs when you're initializing the storage emulator for the first time. After you've initialized the storage emulator, you don't need administrative privileges to run it again.
-
-For more information, see [Use the Azure Storage Emulator for Development and Testing](storage-use-emulator.md). You can also initialize the storage emulator in Visual Studio, which will also require administrative privileges.
-
-### <a name="you-are-encountering-problems-installing-the-Windows-Azure-SDK"></a>You're encountering problems installing the Azure SDK for .NET
-
-When you try to install the SDK, it fails trying to install the storage emulator on your local machine. The installation log contains one of the following messages:
-
-- CAQuietExec: Error: Unable to access SQL instance
-- CAQuietExec: Error: Unable to create database
-
-The cause is an issue with your existing LocalDB installation. By default, the storage emulator uses LocalDB to persist data when it simulates the Azure storage services. You can reset your LocalDB instance by running the following commands in a command-prompt window before trying to install the SDK.
-
-```
-sqllocaldb stop v11.0
-sqllocaldb delete v11.0
-delete %USERPROFILE%\WAStorageEmulatorDb3*.*
-sqllocaldb create v11.0
-```
-
-The **delete** command removes any old database files from previous installations of the storage emulator.
-
-### <a name="you-have-a-different-issue-with-a-storage-service"></a>You have a different issue with a storage service
-
-If the previous troubleshooting sections don't include the issue you're having with a storage service, you should adopt the following approach to diagnosing and troubleshooting your issue.
-
-- Check your metrics to see if there's any change from your expected baseline behavior. From the metrics, you may be able to determine whether the issue is transient or permanent, and which storage operations the issue is affecting.
-- You can use the metrics information to help you search your server-side log data for more detailed information about any errors that are occurring. This information may help you troubleshoot and resolve the issue.
-- If the information in the server-side logs isn't sufficient to troubleshoot the issue successfully, you can use the Storage Client Library client-side logs to investigate the behavior of your client application, and tools such as Fiddler and Wireshark to investigate your network.
-
-For more information about using Fiddler, see "[Appendix 1: Using Fiddler to capture HTTP and HTTPS traffic]."
-
-For more information about using Wireshark, see "[Appendix 2: Using Wireshark to capture network traffic]."
-
-## <a name="appendices"></a>Appendices
-
-The appendices describe several tools that you may find useful when you're diagnosing and troubleshooting issues with Azure Storage (and other services). These tools are not part of Azure Storage and some are third-party products. As such, the tools discussed in these appendices are not covered by any support agreement you may have with Microsoft Azure or Azure Storage, and therefore as part of your evaluation process you should examine the licensing and support options available from the providers of these tools.
-
-### <a name="appendix-1"></a>Appendix 1: Using Fiddler to capture HTTP and HTTPS traffic
-
-[Fiddler](https://www.telerik.com/fiddler) is a useful tool for analyzing the HTTP and HTTPS traffic between your client application and the Azure storage service you're using.
-
-> [!NOTE]
-> Fiddler can decode HTTPS traffic; you should read the Fiddler documentation carefully to understand how it does this, and to understand the security implications.
->
->
-
-This appendix provides a brief walkthrough of how to configure Fiddler to capture traffic between the local machine where you have installed Fiddler and the Azure storage services.
-
-After you have launched Fiddler, it will begin capturing HTTP and HTTPS traffic on your local machine. The following are some useful commands for controlling Fiddler:
-
-- Stop and start capturing traffic. On the main menu, go to **File** and then click **Capture Traffic** to toggle capturing on and off.
-- Save captured traffic data. On the main menu, go to **File**, click **Save**, and then click **All Sessions**: this enables you to save the traffic in a Session Archive file. You can reload a Session Archive later for analysis, or send it if requested to Microsoft support.
-
-To limit the amount of traffic that Fiddler captures, you can use filters that you configure in the **Filters** tab. The following screenshot shows a filter that captures only traffic sent to the **contosoemaildist.table.core.windows.net** storage endpoint:
-
-![Screenshot that shows a filter that captures only traffic sent to the contosoemaildist.table.core.windows.net storage endpoint.][5]
-
-### <a name="appendix-2"></a>Appendix 2: Using Wireshark to capture network traffic
-
-[Wireshark](https://www.wireshark.org/) is a network protocol analyzer that enables you to view detailed packet information for a wide range of network protocols.
-
-The following procedure shows you how to capture detailed packet information for traffic from the local machine where you installed Wireshark to the table service in your Azure storage account.
-
-1. Launch Wireshark on your local machine.
-2. In the **Start** section, select the local network interface or interfaces that are connected to the internet.
-3. Click **Capture Options**.
-4. Add a filter to the **Capture Filter** textbox. For example, **host contosoemaildist.table.core.windows.net** will configure Wireshark to capture only packets sent to or from the table service endpoint in the **contosoemaildist** storage account. Check out the [complete list of Capture Filters](https://wiki.wireshark.org/CaptureFilters).
-
- ![Screenshot that shows how to add a filter to the Capture Filter textbox.][6]
-5. Click **Start**. Wireshark will now capture all the packets sent to or from the table service endpoint as you use your client application on your local machine.
-6. When you have finished, on the main menu click **Capture** and then **Stop**.
-7. To save the captured data in a Wireshark Capture File, on the main menu click **File** and then **Save**.
-
-Wireshark will highlight any errors that exist in the **packet list** window. You can also use the **Expert Info** window (click **Analyze**, then **Expert Info**) to view a summary of errors and warnings.
-
-![Screenshot that shows the Expert Info window where you can view a summary of errors and warnings.][7]
-
-You can also choose to view the TCP data as the application layer sees it by right-clicking on the TCP data and selecting **Follow TCP Stream**. This is useful if you captured your dump without a capture filter. For more information, see [Following TCP Streams](https://www.wireshark.org/docs/wsug_html_chunked/ChAdvFollowTCPSection.html).
-
-![Screenshot that shows how to view the TCP data as the application layer sees it.][8]
-
-> [!NOTE]
-> For more information about using Wireshark, see the [Wireshark Users Guide](https://www.wireshark.org/docs/wsug_html_chunked).
->
->
-
-### <a name="appendix-4"></a>Appendix 4: Using Excel to view metrics and log data
-
-Many tools enable you to download the Storage Metrics data from Azure table storage in a delimited format that makes it easy to load the data into Excel for viewing and analysis. Storage Logging data from Azure Blob Storage is already in a delimited format that you can load into Excel. However, you'll need to add appropriate column headings based on the information at [Storage Analytics Log Format](/rest/api/storageservices/Storage-Analytics-Log-Format) and [Storage Analytics Metrics Table Schema](/rest/api/storageservices/Storage-Analytics-Metrics-Table-Schema).
-
-To import your Storage Logging data into Excel after you download it from blob storage:
-- On the **Data** menu, click **From Text**.
-- Browse to the log file you want to view and click **Import**.
-- On step 1 of the **Text Import Wizard**, select **Delimited**.
-
-On step 2 of the **Text Import Wizard**, select **Semicolon** as the only delimiter and choose double-quote as the **Text qualifier**. Then click **Finish** and choose where to place the data in your workbook.
-
-### <a name="appendix-5"></a>Appendix 5: Monitoring with Application Insights for Azure DevOps
-
-You can also use the Application Insights feature for Azure DevOps as part of your performance and availability monitoring. This tool can:
-- Make sure your web service is available and responsive. Whether your app is a web site or a device app that uses a web service, it can test your URL every few minutes from locations around the world, and let you know if there's a problem.
-- Quickly diagnose any performance issues or exceptions in your web service. Find out if CPU or other resources are being stretched, get stack traces from exceptions, and easily search through log traces. If the app's performance drops below acceptable limits, Microsoft can send you an email. You can monitor both .NET and Java web services.
-
-You can find more information at [What is Application Insights](../../azure-monitor/app/app-insights-overview.md).
-
-## Next steps
-
-For more information about analytics in Azure Storage, see these resources:
-- [Monitor a storage account in the Azure portal](./manage-storage-analytics-logs.md)
-- [Storage analytics](storage-analytics.md)
-- [Storage analytics metrics](storage-analytics-metrics.md)
-- [Storage analytics metrics table schema](/rest/api/storageservices/storage-analytics-metrics-table-schema)
-- [Storage analytics logs](storage-analytics-logging.md)
-- [Storage analytics log format](/rest/api/storageservices/storage-analytics-log-format)
-
-<!--Anchors-->
-[Introduction]: #introduction
-[How this guide is organized]: #how-this-guide-is-organized
-
-[Monitoring your storage service]: #monitoring-your-storage-service
-[Monitoring service health]: #monitoring-service-health
-[Monitoring capacity]: #monitoring-capacity
-[Monitoring availability]: #monitoring-availability
-[Monitoring performance]: #monitoring-performance
-
-[Diagnosing storage issues]: #diagnosing-storage-issues
-[Service health issues]: #service-health-issues
-[Performance issues]: #performance-issues
-[Diagnosing errors]: #diagnosing-errors
-[Storage emulator issues]: #storage-emulator-issues
-[Storage logging tools]: #storage-logging-tools
-[Using network logging tools]: #using-network-logging-tools
-
-[End-to-end tracing]: #end-to-end-tracing
-[Correlating log data]: #correlating-log-data
-[Client request ID]: #client-request-id
-[Server request ID]: #server-request-id
-[Timestamps]: #timestamps
-
-[Troubleshooting guidance]: #troubleshooting-guidance
-[Metrics show high AverageE2ELatency and low AverageServerLatency]: #metrics-show-high-AverageE2ELatency-and-low-AverageServerLatency
-[Metrics show low AverageE2ELatency and low AverageServerLatency but the client is experiencing high latency]: #metrics-show-low-AverageE2ELatency-and-low-AverageServerLatency
-[Metrics show high AverageServerLatency]: #metrics-show-high-AverageServerLatency
-[You're experiencing unexpected delays in message delivery on a queue]: #you-are-experiencing-unexpected-delays-in-message-delivery
-
-[Metrics show an increase in PercentThrottlingError]: #metrics-show-an-increase-in-PercentThrottlingError
-[Transient increase in PercentThrottlingError]: #transient-increase-in-PercentThrottlingError
-[Permanent increase in PercentThrottlingError error]: #permanent-increase-in-PercentThrottlingError
-[Metrics show an increase in PercentTimeoutError]: #metrics-show-an-increase-in-PercentTimeoutError
-[Metrics show an increase in PercentNetworkError]: #metrics-show-an-increase-in-PercentNetworkError
-
-[The client is receiving HTTP 403 (Forbidden) messages]: #the-client-is-receiving-403-messages
-[The client is receiving HTTP 404 (Not found) messages]: #the-client-is-receiving-404-messages
-[The client or another process previously deleted the object]: #client-previously-deleted-the-object
-[A Shared Access Signature (SAS) authorization issue]: #SAS-authorization-issue
-[Client-side JavaScript code doesn't have permission to access the object]: #JavaScript-code-does-not-have-permission
-[Network failure]: #network-failure
-[The client is receiving HTTP 409 (Conflict) messages]: #the-client-is-receiving-409-messages
-
-[Metrics show low PercentSuccess or analytics log entries have operations with transaction status of ClientOtherErrors]: #metrics-show-low-percent-success
-[Capacity metrics show an unexpected increase in storage capacity usage]: #capacity-metrics-show-an-unexpected-increase
-[Your issue arises from using the storage emulator for development or test]: #your-issue-arises-from-using-the-storage-emulator
-[Feature "X" is not working in the storage emulator]: #feature-X-is-not-working
-[Error "The value for one of the HTTP headers is not in the correct format" when using the storage emulator]: #error-HTTP-header-not-correct-format
-[Running the storage emulator requires administrative privileges]: #storage-emulator-requires-administrative-privileges
-[You're encountering problems installing the Azure SDK for .NET]: #you-are-encountering-problems-installing-the-Windows-Azure-SDK
-[You have a different issue with a storage service]: #you-have-a-different-issue-with-a-storage-service
-
-[Appendices]: #appendices
-[Appendix 1: Using Fiddler to capture HTTP and HTTPS traffic]: #appendix-1
-[Appendix 2: Using Wireshark to capture network traffic]: #appendix-2
-[Appendix 4: Using Excel to view metrics and log data]: #appendix-4
-[Appendix 5: Monitoring with Application Insights for Azure DevOps]: #appendix-5
-
-<!--Image references-->
-[1]: ./media/storage-monitoring-diagnosing-troubleshooting/overview.png
-[3]: ./media/storage-monitoring-diagnosing-troubleshooting/hour-minute-metrics.png
-[4]: ./media/storage-monitoring-diagnosing-troubleshooting/high-e2e-latency.png
-[5]: ./media/storage-monitoring-diagnosing-troubleshooting/fiddler-screenshot.png
-[6]: ./media/storage-monitoring-diagnosing-troubleshooting/wireshark-screenshot-1.png
-[7]: ./media/storage-monitoring-diagnosing-troubleshooting/wireshark-screenshot-2.png
-[8]: ./media/storage-monitoring-diagnosing-troubleshooting/wireshark-screenshot-3.png
-[9]: ./media/storage-monitoring-diagnosing-troubleshooting/mma-screenshot-1.png
-[10]: ./media/storage-monitoring-diagnosing-troubleshooting/mma-screenshot-2.png
storage Storage Use Azcopy Troubleshoot https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/common/storage-use-azcopy-troubleshoot.md
- Title: Troubleshoot problems with AzCopy (Azure Storage)
-description: Find workarounds to common issues with AzCopy v10.
--- Previously updated : 06/09/2022-----
-# Troubleshoot problems with AzCopy v10
-
-This article describes common issues that you might encounter while using AzCopy, helps you to identify the causes of those issues, and then suggests ways to resolve them.
-
-## Identifying problems
-
-You can determine whether a job succeeds by looking at the exit code.
-
-If the exit code is `0-success`, then the job completed successfully.
-
-If the exit code is `1-error`, then examine the log file. Once you understand the exact error message, it becomes much easier to search for the right keywords and figure out the solution. To learn more, see [Find errors and resume jobs by using log and plan files in AzCopy](storage-use-azcopy-configure.md).
-
-If the exit code is `2-panic`, then check whether the log file exists. If it doesn't, file a bug or reach out to support.
-
-If the exit code is any other non-zero value, it may be an exit code from the system. For example, OOMKilled. Check your operating system documentation for special exit codes.
-
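-When you call AzCopy from a script, you can branch on the exit code directly. The following is a minimal PowerShell sketch (the source and destination URLs are placeholders); it assumes the script runs AzCopy as a native command, so the exit code lands in `$LASTEXITCODE`:
-
-```powershell
-# Run an AzCopy job, then act on its exit code (0 = success, 1 = error, 2 = panic).
-azcopy copy "<source-url>" "<destination-url>"
-
-switch ($LASTEXITCODE) {
-    0       { "Job completed successfully." }
-    1       { "Job failed. Review the AzCopy log file for the exact error message." }
-    2       { "AzCopy panicked. If no log file exists, file a bug or contact support." }
-    default { "Exit code $LASTEXITCODE likely came from the system (for example, OOMKilled)." }
-}
-```
-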
-## 403 errors
-
-It's common to encounter 403 errors. Sometimes they're benign and don't result in a failed transfer. For example, in AzCopy logs, you might see that a HEAD request received 403 errors. Those errors appear when AzCopy checks whether a resource is public. In most cases, you can ignore those instances.
-
-In some cases 403 errors can result in a failed transfer. If this happens, other attempts to transfer files will likely fail until you resolve the issue. 403 errors can occur as a result of authentication and authorization issues. They can also occur when requests are blocked due to the storage account firewall configuration.
-
-### Authentication / Authorization issues
-
-403 errors that prevent data transfer occur because of issues with SAS tokens, role based access control (Azure RBAC) roles, and access control list (ACL) configurations.
-
-##### SAS tokens
-
-If you're using a shared access signature (SAS) token, verify the following:
-- The expiration and start times of the SAS token are appropriate.
-
-- You selected all the necessary permissions for the token.
-
-- You generated the token by using an official SDK or tool. Try Storage Explorer if you haven't already.
-
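-If you're not sure what a given SAS token contains, you can inspect its query parameters directly. The following PowerShell sketch (the token value is a placeholder) prints the start time (`st`), expiry time (`se`), and permissions (`sp`) fields so you can check them against the list above:
-
-```powershell
-# Sketch: split a SAS token into its query parameters and print the fields that most often cause 403 errors.
-$sasToken = "<paste-your-sas-token-here>"
-$fields = @{}
-foreach ($pair in $sasToken.TrimStart('?').Split('&')) {
-    $name, $value = $pair.Split('=', 2)
-    $fields[$name] = [uri]::UnescapeDataString($value)
-}
-"Start time:  $($fields['st'])"
-"Expiry time: $($fields['se'])"
-"Permissions: $($fields['sp'])"
-```
-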
-##### Azure RBAC
-
-If you're using role based access control (Azure RBAC) roles via the `azcopy login` command, verify that you have the appropriate Azure roles assigned to your identity (For example: the Storage Blob Data Contributor role).
-
-To learn more about Azure roles, see [Assign an Azure role for access to blob data](../blobs/assign-azure-role-data-access.md).
-
-##### ACLs
-
-If you're using access control lists (ACLs), verify that your identity appears in an ACL entry for each file or directory that you intend to access. Also, make sure that each ACL entry reflects the appropriate permission level.
-
-To learn more about ACLs and ACL entries, see [Access control lists (ACLs) in Azure Data Lake Storage Gen2](../blobs/data-lake-storage-access-control.md).
-
-To learn about how to incorporate Azure roles together with ACLs, and how the system evaluates them to make authorization decisions, see [Access control model in Azure Data Lake Storage Gen2](../blobs/data-lake-storage-access-control-model.md).
-
-### Firewall and private endpoint issues
-
-If the storage firewall configuration isn't configured to allow access from the machine where AzCopy is running, AzCopy operations will return an HTTP 403 error.
-
-##### Transferring data from or to a local machine
-
-If you're uploading or downloading data between a storage account and an on-premises machine, make sure that the machine that runs AzCopy is able to access either the source or destination storage account. You might have to use IP network rules in the firewall settings of either the source **or** destination accounts to allow access from the public IP address of the machine.
-
-##### Transferring data between storage accounts
-
-403 authorization errors can prevent you from transferring data between accounts by using the client machine where AzCopy is running.
-
-If you're copying data between storage accounts, make sure that the machine that runs AzCopy is able to access both the source **and** the destination account. You might have to use IP network rules in the firewall settings of both the source and destination accounts to allow access from the public IP address of the machine. The service will use the IP address of the AzCopy client machine to authorize the source to destination traffic. To learn how to add a public IP address to the firewall settings of a storage account, see [Grant access from an internet IP range](storage-network-security.md#grant-access-from-an-internet-ip-range).
-
-If your VM doesn't have, or can't have, a public IP address, consider using a private endpoint. See [Use private endpoints for Azure Storage](storage-private-endpoints.md).
-
-##### Using a Private link
-
-A [Private Link](../../private-link/private-link-overview.md) is at the virtual network (VNet) / subnet level. If you want AzCopy requests to go through a Private Link, then AzCopy must make those requests from a VM running in that VNet / subnet. For example, if you configure a Private Link in VNet1 / Subnet1 but the VM on which AzCopy runs is in VNet1 / Subnet2, then AzCopy requests won't use the Private Link and they're expected to fail.
-
-## Proxy-related errors
-
-If you encounter TCP errors such as `dial tcp: lookup proxy.x.x: no such host`, it means that your environment isn't configured to use the correct proxy, or you're using an advanced proxy that AzCopy doesn't recognize.
-
-You need to update the proxy settings to reflect the correct configurations. See [Configure proxy settings](storage-ref-azcopy-configuration-settings.md?toc=/azure/storage/blobs/toc.json#configure-proxy-settings).
-
-You can also bypass the proxy by setting the environment variable NO_PROXY="*".
-
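-For example, the following PowerShell sketch sets the bypass for the current session only (the source and destination URLs are placeholders):
-
-```powershell
-# Tell AzCopy to bypass any configured proxy for this session.
-$env:NO_PROXY = "*"
-azcopy copy "<source-url>" "<destination-url>"
-```
-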
-Here are the endpoints that AzCopy needs to use:
-
-| Log in endpoints | Azure Storage endpoints |
-|--|--|
-| `login.microsoftonline.com (global Azure)` | `(blob \| file \| dfs).core.windows.net (global Azure)` |
-| `login.chinacloudapi.cn (Azure China)` | `(blob \| file \| dfs).core.chinacloudapi.cn (Azure China)` |
-| `login.microsoftonline.de (Azure Germany)` | `(blob \| file \| dfs).core.cloudapi.de (Azure Germany)` |
-| `login.microsoftonline.us (Azure US Government)` | `(blob \| file \| dfs).core.usgovcloudapi.net (Azure US Government)` |
-
-## x509: certificate signed by unknown authority
-
-This error is often related to the use of a proxy, which is using a Secure Sockets Layer (SSL) certificate that isn't trusted by the operating system. Verify your settings and make sure that the certificate is trusted at the operating system level.
-
-We recommend adding the certificate to your machine's root certificate store as that's where the trusted authorities are kept.
-
-## Unrecognized Parameters
-
-If you receive an error message stating that your parameters aren't recognized, make sure that you're using the correct version of AzCopy. AzCopy V8 and earlier versions are deprecated. [AzCopy V10](storage-use-azcopy-v10.md?toc=/azure/storage/blobs/toc.json) is the current version, and it's a complete rewrite that doesn't share any syntax with the previous versions. See the [v8-to-v10 migration guide](https://github.com/Azure/azure-storage-azcopy/blob/main/MigrationGuideV8toV10.md).
-
-Also, make sure to utilize built-in help messages by using the `-h` switch with any command (For example: `azcopy copy -h`). See [Get command help](storage-use-azcopy-v10.md?toc=/azure/storage/blobs/toc.json#get-command-help). To view the same information online, see [azcopy copy](storage-ref-azcopy-copy.md?toc=/azure/storage/blobs/toc.json).
-
-To help you understand commands, we provide an educational tool [here](https://azcopyvnextrelease.z22.web.core.windows.net/). This tool demonstrates the most popular AzCopy commands along with the most popular command flags. Our examples are [here](storage-use-azcopy-v10.md?toc=/azure/storage/blobs/toc.json#transfer-data). If you have any questions, try searching through the existing [GitHub issues](https://github.com/Azure/azure-storage-azcopy/issues) first to see if they've already been answered.
-
-## Conditional access policy error
-
-You can receive the following error when you invoke the `azcopy login` command.
-
-"Failed to perform login command:
-failed to login with tenantID "common", Azure directory endpoint "https://login.microsoftonline.com", autorest/adal/devicetoken: -REDACTED- AADSTS50005: User tried to log in to a device from a platform (Unknown) that's currently not supported through Conditional Access policy. Supported device platforms are: iOS, Android, Mac, and Windows flavors.
-Trace ID: -REDACTED-
-Correlation ID: -REDACTED-
-Timestamp: 2021-01-05 01:58:28Z"
-
-This error means that your administrator has configured a conditional access policy that specifies what type of device you can log in from. AzCopy uses the device code flow, which can't guarantee that the machine where you're using the AzCopy tool is also where you're logging in.
-
-If your device is among the list of supported platforms, then you might be able to use Storage Explorer, which integrates AzCopy for all data transfers (it passes tokens to AzCopy via the secret store) but provides a login workflow that supports passing device information. AzCopy itself also supports managed identities and service principals, which could be used as an alternative.
-
-If your device isn't among the list of supported platforms, contact your administrator for help.
-
-## Server busy, network errors, timeouts
-
-If you see a large number of failed requests with the `503 Server Busy` status, then your requests are being throttled by the storage service. If you're seeing network errors or timeouts, you might be attempting to push through too much data across your infrastructure and that infrastructure is having difficulty handling it. In all cases, the workaround is similar.
-
-If you see a large file failing over and over again because certain chunks fail each time, then try limiting the number of concurrent network connections or the throughput, depending on your specific case. We suggest that you drastically lower the performance at first, observe whether that solves the initial problem, and then ramp performance back up until an overall balance is achieved.
-
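-For example, you might start with conservative limits and raise them gradually. The following sketch uses the `AZCOPY_CONCURRENCY_VALUE` environment variable and the `--cap-mbps` flag (both covered in the optimization article linked below); the specific values and the source/destination URLs are placeholders, not recommendations:
-
-```powershell
-# Limit the number of concurrent network connections for this session (placeholder value).
-$env:AZCOPY_CONCURRENCY_VALUE = "16"
-
-# Cap throughput in megabits per second while copying (placeholder value and URLs).
-azcopy copy "<source-url>" "<destination-url>" --cap-mbps 100
-```
-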
-For more information, see [Optimize the performance of AzCopy with Azure Storage](storage-use-azcopy-optimize.md).
-
-If you're copying data between accounts by using AzCopy, the quality and reliability of the network from which you run AzCopy might impact the overall performance. Even though data transfers from server to server, AzCopy does initiate calls for each file to copy between service endpoints.
-
-## Known constraints with AzCopy
-- Copying data from government clouds to commercial clouds isn't supported. However, copying data from commercial clouds to government clouds is supported.
-
-- Asynchronous service-side copy isn't supported. AzCopy performs synchronous copy only. In other words, by the time the job finishes, the data has been moved.
-
-- When copying to an Azure File share, if you forgot to specify the flag `--preserve-smb-permissions`, and you do not want to transfer the data again, then consider using Robocopy to bring over the permissions.
-
-- Azure Functions has a different endpoint for MSI authentication, which AzCopy doesn't yet support.
-
-## Known temporary issues
-
-There's a service issue impacting AzCopy 10.11+, which uses the [PutBlobFromURL API](/rest/api/storageservices/put-blob-from-url) to copy blobs smaller than the given block size (whose default is 8 MiB). If the user has any firewall (VNet / IP / PL / SE Policy) on the source account, the `PutBlobFromURL` API might mistakenly return the message `409 Copy source blob has been modified`. The workaround is to use AzCopy 10.10.
-- https://azcopyvnext.azureedge.net/release20210415/azcopy_darwin_amd64_10.10.0.zip
-- https://azcopyvnext.azureedge.net/release20210415/azcopy_linux_amd64_10.10.0.tar.gz
-- https://azcopyvnext.azureedge.net/release20210415/azcopy_windows_386_10.10.0.zip
-- https://azcopyvnext.azureedge.net/release20210415/azcopy_windows_amd64_10.10.0.zip
-
-## See also
-- [Get started with AzCopy](storage-use-azcopy-v10.md)
-- [Find errors and resume jobs by using log and plan files in AzCopy](storage-use-azcopy-configure.md)
storage Troubleshoot Latency Storage Analytics Logs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/common/troubleshoot-latency-storage-analytics-logs.md
- Title: Troubleshoot latency using Storage Analytics logs
-description: Identify and troubleshoot latency issues using Azure Storage Analytic logs, and optimize the client application.
---- Previously updated : 10/21/2019---
-tags: ''
--
-# Troubleshoot latency using Storage Analytics logs
-
-Diagnosing and troubleshooting is a key skill for building and supporting client applications with Azure Storage.
-
-Because of the distributed nature of an Azure application, diagnosing and troubleshooting both errors and performance issues may be more complex than in traditional environments.
-
-The following steps demonstrate how to identify and troubleshoot latency issues using Azure Storage Analytic logs, and optimize the client application.
-
-## Recommended steps
-
-1. Download the [Storage Analytics logs](./manage-storage-analytics-logs.md#download-storage-logging-log-data).
-
-2. Use the following PowerShell script to convert the raw format logs into tabular format:
-
- ```powershell
- $Columns =
- ( "version-number",
- "request-start-time",
- "operation-type",
- "request-status",
- "http-status-code",
- "end-to-end-latency-in-ms",
- "server-latency-in-ms",
- "authentication-type",
- "requester-account-name",
- "owner-account-name",
- "service-type",
- "request-url",
- "requested-object-key",
- "request-id-header",
- "operation-count",
- "requester-ip-address",
- "request-version-header",
- "request-header-size",
- "request-packet-size",
- "response-header-size",
- "response-packet-size",
- "request-content-length",
- "request-md5",
- "server-md5",
- "etag-identifier",
- "last-modified-time",
- "conditions-used",
- "user-agent-header",
- "referrer-header",
- "client-request-id"
- )
-
- $logs = Import-Csv "REPLACE THIS WITH FILE PATH" -Delimiter ";" -Header $Columns
-
- $logs | Out-GridView -Title "Storage Analytic Log Parser"
- ```
-
-3. The script will launch a GUI window where you can filter the information by columns, as shown below.
-
- ![Storage Analytic Log Parser Window](media/troubleshoot-latency-storage-analytics-logs/storage-analytic-log-parser-window.png)
-
-4. Narrow down the log entries based on "operation-type", and look for the log entry created during the issue's time frame.
-
- ![Operation-type log entries](media/troubleshoot-latency-storage-analytics-logs/operation-type.png)
-
-5. During the time when the issue occurred, the following values are important:
-
- - Operation-type = GetBlob
- - request-status = SASNetworkError
- - End-to-End-Latency-In-Ms = 8453
- - Server-Latency-In-Ms = 391
-
- End-to-End Latency is calculated using the following equation:
-
- - End-to-End Latency = Server-Latency + Client Latency
-
- Calculate the Client Latency using the log entry:
-
-   - Client Latency = End-to-End Latency – Server-Latency
-
-   Example: 8453 – 391 = 8062 ms (the sketch after the following table shows one way to compute this for every log entry)
-
- The following table provides information about the high latency OperationType and RequestStatus results:
-
- | Blob Type |RequestStatus=<br>Success|RequestStatus=<br>(SAS)NetworkError|Recommendation|
-   |--|--|--|--|
- |GetBlob|Yes|No|[**GetBlob Operation:** RequestStatus = Success](#getblob-operation-requeststatus--success)|
- |GetBlob|No|Yes|[**GetBlob Operation:** RequestStatus = (SAS)NetworkError](#getblob-operation-requeststatus--sasnetworkerror)|
- |PutBlob|Yes|No|[**Put Operation:** RequestStatus = Success](#put-operation-requeststatus--success)|
- |PutBlob|No|Yes|[**Put Operation:** RequestStatus = (SAS)NetworkError](#put-operation-requeststatus--sasnetworkerror)|
-
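-If you want to compute the client-side latency for every imported log entry at once, you can extend the PowerShell snippet from step 2 with a calculated column. This is a sketch that reuses the `$logs` variable and the column names defined above, and it assumes the latency fields contain numeric values:
-
-```powershell
-# Add a computed client-latency column (end-to-end latency minus server latency) and view the result.
-$logs |
-    Select-Object "request-start-time", "operation-type", "request-status",
-        @{ Name = "client-latency-in-ms"; Expression = { [double]$_."end-to-end-latency-in-ms" - [double]$_."server-latency-in-ms" } } |
-    Out-GridView -Title "Client latency per request"
-```
-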
-## Status results
-
-### GetBlob Operation: RequestStatus = Success
-
-Check the following values as mentioned in step 5 of the "Recommended steps" section:
-- End-to-End Latency
-- Server-Latency
-- Client-Latency
-
-In a **GetBlob Operation** with **RequestStatus = Success**, if **Max Time** is spent in **Client-Latency**, this indicates that Azure Storage is spending a large amount of time writing data to the client. This delay indicates a Client-Side Issue.
-
-**Recommendation:**
-- Investigate the code in your client.
-- Use Wireshark, Microsoft Message Analyzer, or Tcping to investigate network connectivity issues from the client.
-
-### GetBlob Operation: RequestStatus = (SAS)NetworkError
-
-Check the following values as mentioned in step 5 of the "Recommended steps" section:
-- End-to-End Latency
-- Server-Latency
-- Client-Latency
-
-In a **GetBlob Operation** with **RequestStatus = (SAS)NetworkError**, if **Max Time** is spent in **Client-Latency**, the most common issue is that the client is disconnecting before a timeout expires in the storage service.
-
-**Recommendation:**
-- Investigate the code in your client to understand why and when the client disconnects from the storage service.
-- Use Wireshark, Microsoft Message Analyzer, or Tcping to investigate network connectivity issues from the client.
-
-### Put Operation: RequestStatus = Success
-
-Check the following values as mentioned in step 5 of the "Recommended steps" section:
-- End-to-End Latency
-- Server-Latency
-- Client-Latency
-
-In a **Put Operation** with **RequestStatus = Success**, if **Max Time** is spent in **Client-Latency**, this indicates that the client is taking more time to send data to Azure Storage. This delay indicates a Client-Side Issue.
-
-**Recommendation:**
-- Investigate the code in your client.
-- Use Wireshark, Microsoft Message Analyzer, or Tcping to investigate network connectivity issues from the client.
-
-### Put Operation: RequestStatus = (SAS)NetworkError
-
-Check the following values as mentioned in step 5 of the "Recommended steps" section:
-- End-to-End Latency
-- Server-Latency
-- Client-Latency
-
-In a **PutBlob Operation** with **RequestStatus = (SAS)NetworkError**, if **Max Time** is spent in **Client-Latency**, the most common issue is that the client is disconnecting before a timeout expires in the storage service.
-
-**Recommendation:**
-- Investigate the code in your client to understand why and when the client disconnects from the storage service.
-- Use Wireshark, Microsoft Message Analyzer, or Tcping to investigate network connectivity issues from the client.
storage Troubleshoot Storage Availability https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/common/troubleshoot-storage-availability.md
- Title: Troubleshoot availability issues in Azure Storage accounts
-description: Identify and troubleshoot availability issues in Azure Storage accounts.
--- Previously updated : 05/23/2022------
-# Troubleshoot availability issues in Azure Storage accounts
-
-This article helps you investigate changes in the availability (such as number of failed requests). These changes in availability can often be identified by monitoring storage metrics in Azure Monitor. For general information about using metrics and logs in Azure Monitor, see
-- [Monitoring Azure Blob Storage](../blobs/monitor-blob-storage.md)
-- [Monitoring Azure Files](../files/storage-files-monitoring.md)
-- [Monitoring Azure Queue Storage](../queues/monitor-queue-storage.md)
-- [Monitoring Azure Table storage](../tables/monitor-table-storage.md)
-
-## Monitoring availability
-
-You should monitor the availability of the storage services in your storage account by monitoring the value of the **Availability** metric. The **Availability** metric contains a percentage value and is calculated by taking the total billable requests value and dividing it by the number of applicable requests, including those requests that produced unexpected errors.
-
-Any value less than 100% indicates that some storage requests are failing. You can see why they are failing by examining the **ResponseType** dimension for error types such as **ServerTimeoutError**. You should expect to see **Availability** fall temporarily below 100% for reasons such as transient server timeouts while the service moves partitions to better load-balance requests; the retry logic in your client application should handle such intermittent conditions.
-
-You can use features in Azure Monitor to alert you if **Availability** for a service falls below a threshold that you specify.
-
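-You can also pull the **Availability** metric from a script. The following sketch assumes the Az PowerShell modules are installed and that the resource ID placeholders are replaced with your own values:
-
-```powershell
-# Retrieve the hourly average Availability metric for a blob service (resource ID is a placeholder).
-$resourceId = "/subscriptions/<subscription-id>/resourceGroups/<resource-group>/providers" +
-    "/Microsoft.Storage/storageAccounts/<account-name>/blobServices/default"
-
-Get-AzMetric -ResourceId $resourceId -MetricName "Availability" -TimeGrain 01:00:00 -AggregationType Average
-```
-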
-## Metrics show an increase in throttling errors
-
-Throttling errors occur when you exceed the scalability targets of a storage service. The storage service throttles to ensure that no single client or tenant can use the service at the expense of others. For details on scalability targets for storage accounts and performance targets for partitions within storage accounts, see [Scalability and performance targets for standard storage accounts](scalability-targets-standard-account.md).
-
-If the **ClientThrottlingError** or **ServerBusyError** value of the **ResponseType** dimension shows an increase in the percentage of requests that are failing with a throttling error, you need to investigate one of two scenarios:
-- Transient increase in PercentThrottlingError
-- Permanent increase in PercentThrottlingError
-
-An increase in throttling errors often occurs at the same time as an increase in the number of storage requests, or when you are initially load testing your application. This may also manifest itself in the client as "503 Server Busy" or "500 Operation Timeout" HTTP status messages from storage operations.
-
-### Transient increase in throttling errors
-
-If you are seeing spikes in throttling errors that coincide with periods of high activity for the application, you should implement an exponential (not linear) back-off strategy for retries in your client. Back-off retries reduce the immediate load on the partition and help your application to smooth out spikes in traffic. For more information about how to implement retry policies using the Storage Client Library, see the [RetryOptions.MaxRetries](/dotnet/api/microsoft.azure.storage.retrypolicies) property. A minimal script-level sketch follows the note below.
-
-> [!NOTE]
-> You may also see spikes in throttling errors that do not coincide with periods of high activity for the application: the most likely cause here is the storage service moving partitions to improve load balancing.
-
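-If your client doesn't use a storage SDK with a built-in retry policy, the back-off idea looks roughly like the following PowerShell sketch. `Invoke-StorageOperation` is a placeholder for whatever storage call is being throttled:
-
-```powershell
-# Retry a throttled operation with exponential (not linear) back-off.
-$maxAttempts = 5
-for ($attempt = 1; $attempt -le $maxAttempts; $attempt++) {
-    try {
-        Invoke-StorageOperation   # placeholder for the storage call that returns 503 Server Busy
-        break
-    }
-    catch {
-        if ($attempt -eq $maxAttempts) { throw }
-        $delaySeconds = [math]::Pow(2, $attempt)   # 2, 4, 8, 16, ... seconds
-        Start-Sleep -Seconds $delaySeconds
-    }
-}
-```
-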
-### Permanent increase in throttling errors
-
-If you are seeing a consistently high value for throttling errors following a permanent increase in your transaction volumes, or when you are performing your initial load tests on your application, then you need to evaluate how your application is using storage partitions and whether it is approaching the scalability targets for a storage account. For example, if you are seeing throttling errors on a queue (which counts as a single partition), then you should consider using additional queues to spread the transactions across multiple partitions. If you are seeing throttling errors on a table, you need to consider using a different partitioning scheme to spread your transactions across multiple partitions by using a wider range of partition key values. One common cause of this issue is the prepend/append anti-pattern where you select the date as the partition key and then all data on a particular day is written to one partition: under load, this can result in a write bottleneck. Either consider a different partitioning design or evaluate whether using blob storage might be a better solution. Also check whether throttling is occurring as a result of spikes in your traffic and investigate ways of smoothing your pattern of requests.
-
-If you distribute your transactions across multiple partitions, you must still be aware of the scalability limits set for the storage account. For example, if you used ten queues each processing the maximum of 2,000 1KB messages per second, you will be at the overall limit of 20,000 messages per second for the storage account. If you need to process more than 20,000 entities per second, you should consider using multiple storage accounts. You should also bear in mind that the size of your requests and entities has an impact on when the storage service throttles your clients: if you have larger requests and entities, you may be throttled sooner.
-
-Inefficient query design can also cause you to hit the scalability limits for table partitions. For example, a query with a filter that only selects one percent of the entities in a partition but that scans all the entities in a partition will need to access each entity. Every entity read will count towards the total number of transactions in that partition; therefore, you can easily reach the scalability targets.
-
-> [!NOTE]
-> Your performance testing should reveal any inefficient query designs in your application.
-
-## Metrics show an increase in timeout errors
-
-Timeout errors occur when the **ResponseType** dimension is equal to **ServerTimeoutError** or **ClientTimeout**.
-
-Your metrics show an increase in timeout errors for one of your storage services. At the same time, the client receives a high volume of "500 Operation Timeout" HTTP status messages from storage operations.
-
-> [!NOTE]
-> You may see timeout errors temporarily as the storage service load balances requests by moving a partition to a new server.
-
-The server timeouts (**ServerTimeoutError**) are caused by an error on the server. The client timeouts (**ClientTimeout**) happen because an operation on the server has exceeded the timeout specified by the client; for example, a client using the Storage Client Library can set a timeout for an operation.
-
-Server timeouts indicate a problem with the storage service that requires further investigation. You can use metrics to see if you are hitting the scalability limits for the service and to identify any spikes in traffic that might be causing this problem. If the problem is intermittent, it may be due to load-balancing activity in the service. If the problem is persistent and is not caused by your application hitting the scalability limits of the service, you should raise a support issue. For client timeouts, you must decide if the timeout is set to an appropriate value in the client and either change the timeout value set in the client or investigate how you can improve the performance of the operations in the storage service, for example by optimizing your table queries or reducing the size of your messages.
-
-## Metrics show an increase in network errors
-
-Network errors occur when the **ResponseType** dimension is equal to **NetworkError**. These occur when a storage service detects a network error when the client makes a storage request.
-
-The most common cause of this error is a client disconnecting before a timeout expires in the storage service. Investigate the code in your client to understand why and when the client disconnects from the storage service. You can also use third-party network analysis tools to investigate network connectivity issues from the client.
-
-## See also
-- [Troubleshoot client application errors](../common/troubleshoot-storage-client-application-errors.md?toc=/azure/storage/blobs/toc.json)
-- [Troubleshoot performance issues](../common/troubleshoot-storage-performance.md?toc=/azure/storage/blobs/toc.json)
-- [Monitor, diagnose, and troubleshoot your Azure Storage](/training/modules/monitor-diagnose-and-troubleshoot-azure-storage/)
storage Troubleshoot Storage Client Application Errors https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/common/troubleshoot-storage-client-application-errors.md
- Title: Troubleshoot client application errors in Azure Storage accounts
-description: Identify and troubleshoot errors with client applications that connect to Azure Storage accounts.
--- Previously updated : 05/23/2022------
-# Troubleshoot client application errors in Azure Storage accounts
-
-This article helps you investigate client application errors by using metrics, [client side logs](/rest/api/storageservices/Client-side-Logging-with-the-.NET-Storage-Client-Library), and resource logs in Azure Monitor.
-
-### Diagnosing errors
-
-Users of your application may notify you of errors reported by the client application. Azure Monitor also records counts of different response types (**ResponseType** dimensions) from your storage services such as **NetworkError**, **ClientTimeoutError**, or **AuthorizationError**. While Azure Monitor only records counts of different error types, you can obtain more detail about individual requests by examining server-side, client-side, and network logs. Typically, the HTTP status code returned by the storage service will give an indication of why the request failed.
-
-> [!NOTE]
-> Remember that you should expect to see some intermittent errors: for example, errors due to transient network conditions, or application errors.
-
-The following resources are useful for understanding storage-related status and error codes:
-- [Common REST API Error Codes](/rest/api/storageservices/Common-REST-API-Error-Codes)
-- [Blob Service Error Codes](/rest/api/storageservices/Blob-Service-Error-Codes)
-- [Queue Service Error Codes](/rest/api/storageservices/Queue-Service-Error-Codes)
-- [Table Service Error Codes](/rest/api/storageservices/Table-Service-Error-Codes)
-- [File Service Error Codes](/rest/api/storageservices/File-Service-Error-Codes)
-
-## The client is receiving HTTP 403 (Forbidden) messages
-
-If your client application is throwing HTTP 403 (Forbidden) errors, a likely cause is that the client is using an expired Shared Access Signature (SAS) when it sends a storage request (although other possible causes include clock skew, invalid keys, and empty headers).
-
-The Storage Client Library for .NET enables you to collect client-side log data that relates to storage operations performed by your application. For more information, see [Client-side Logging with the .NET Storage Client Library](/rest/api/storageservices/Client-side-Logging-with-the-.NET-Storage-Client-Library).
-
-The following table shows a sample from the client-side log generated by the Storage Client Library that illustrates this issue occurring:
-
-| Source | Verbosity | Verbosity | Client request ID | Operation text |
-|--|--|--|--|--|
-| Microsoft.Azure.Storage |Information |3 |85d077ab-… |`Starting operation with location Primary per location mode PrimaryOnly.` |
-| Microsoft.Azure.Storage |Information |3 |85d077ab -… |`Starting synchronous request to <https://developer.mozilla.org/en-US/docs/Web/API/XMLHttpRequest/Synchronous_and_Asynchronous_Requests#Synchronous_request>` |
-| Microsoft.Azure.Storage |Information |3 |85d077ab -… |`Waiting for response.` |
-| Microsoft.Azure.Storage |Warning |2 |85d077ab -… |`Exception thrown while waiting for response: The remote server returned an error: (403) Forbidden.` |
-| Microsoft.Azure.Storage |Information |3 |85d077ab -… |`Response received. Status code = 403, Request ID = 9d67c64a-64ed-4b0d-9515-3b14bbcdc63d, Content-MD5 = , ETag = .` |
-| Microsoft.Azure.Storage |Warning |2 |85d077ab -… |`Exception thrown during the operation: The remote server returned an error: (403) Forbidden..` |
-| Microsoft.Azure.Storage |Information |3 |85d077ab -… |`Checking if the operation should be retried. Retry count = 0, HTTP status code = 403, Exception = The remote server returned an error: (403) Forbidden..` |
-| Microsoft.Azure.Storage |Information |3 |85d077ab -… |`The next location has been set to Primary, based on the location mode.` |
-| Microsoft.Azure.Storage |Error |1 |85d077ab -… |`Retry policy did not allow for a retry. Failing with The remote server returned an error: (403) Forbidden.` |
-
-In this scenario, you should investigate why the SAS token is expiring before the client sends the token to the server:
-- Typically, you should not set a start time when you create a SAS for a client to use immediately. If there are small clock differences between the host generating the SAS using the current time and the storage service, then it is possible for the storage service to receive a SAS that is not yet valid.
-
-- Do not set a very short expiry time on a SAS. Again, small clock differences between the host generating the SAS and the storage service can lead to a SAS apparently expiring earlier than anticipated.
-
-- Does the version parameter in the SAS key (for example **sv=2015-04-05**) match the version of the Storage Client Library you are using? We recommend that you always use the latest version of the storage client library.
-
-- If you regenerate your storage access keys, any existing SAS tokens may be invalidated. This issue may arise if you generate SAS tokens with a long expiry time for client applications to cache.
-
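-For example, when you generate a SAS with Azure PowerShell, you can start it slightly in the past to tolerate clock skew and give it a reasonable expiry window. This is a sketch that assumes the Az.Storage module; the account name, key, container name, and permissions are placeholders:
-
-```powershell
-# Sketch: create a container SAS that tolerates small clock differences between hosts.
-$ctx = New-AzStorageContext -StorageAccountName "<account-name>" -StorageAccountKey "<account-key>"
-
-New-AzStorageContainerSASToken -Name "<container-name>" -Permission "rl" `
-    -StartTime (Get-Date).AddMinutes(-5) `
-    -ExpiryTime (Get-Date).AddHours(4) `
-    -Context $ctx
-```
-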
-If you are using the Storage Client Library to generate SAS tokens, then it is easy to build a valid token. However, if you are using the Storage REST API and constructing the SAS tokens by hand, see [Delegating Access with a Shared Access Signature](/rest/api/storageservices/delegate-access-with-shared-access-signature).
-
-## The client is receiving HTTP 404 (Not found) messages
-
-If the client application receives an HTTP 404 (Not found) message from the server, this implies that the object the client was attempting to use (such as an entity, table, blob, container, or queue) does not exist in the storage service. There are a number of possible reasons for this, such as:
-- The client or another process previously deleted the object
-
-- A Shared Access Signature (SAS) authorization issue
-
-- Client-side JavaScript code does not have permission to access the object
-
-- Network failure
-
-### The client or another process previously deleted the object
-
-In scenarios where the client is attempting to read, update, or delete data in a storage service, it is usually easy to identify in the storage resource logs a previous operation that deleted the object in question from the storage service. Often, the log data shows that another user or process deleted the object. The Azure Monitor logs (server-side) show when a client deleted an object.
-
-In the scenario where a client is attempting to insert an object, it may not be immediately obvious why this results in an HTTP 404 (Not found) response given that the client is creating a new object. However, if the client is creating a blob it must be able to find the blob container, if the client is creating a message it must be able to find a queue, and if the client is adding a row it must be able to find the table.
-
-You can use the client-side log from the Storage Client Library to gain a more detailed understanding of when the client sends specific requests to the storage service.
-
-The following client-side log generated by the Storage Client library illustrates the problem when the client cannot find the container for the blob it is creating. This log includes details of the following storage operations:
-
-| Request ID | Operation |
-|--|--|
-| 07b26a5d-... |**DeleteIfExists** method to delete the blob container. Note that this operation includes a **HEAD** request to check for the existence of the container. |
-| e2d06d78… |**CreateIfNotExists** method to create the blob container. Note that this operation includes a **HEAD** request that checks for the existence of the container. The **HEAD** returns a 404 message but continues. |
-| de8b1c3c-... |**UploadFromStream** method to create the blob. The **PUT** request fails with a 404 message |
-
-Log entries:
-
-| Request ID | Operation Text |
-|--|--|
-| 07b26a5d-... |`Starting synchronous request to `https://domemaildist.blob.core.windows.net/azuremmblobcontainer`.` |
-| 07b26a5d-... |`StringToSign = HEAD............x-ms-client-request-id:07b26a5d-....x-ms-date:Tue, 03 Jun 2014 10:33:11 GMT.x-ms-version:2014-02-14./domemaildist/azuremmblobcontainer.restype:container.` |
-| 07b26a5d-... |`Waiting for response.` |
-| 07b26a5d-... |`Response received. Status code = 200, Request ID = eeead849-...Content-MD5 = , ETag = "0x8D14D2DC63D059B".` |
-| 07b26a5d-... |`Response headers were processed successfully, proceeding with the rest of the operation.` |
-| 07b26a5d-... |`Downloading response body.` |
-| 07b26a5d-... |`Operation completed successfully.` |
-| 07b26a5d-... |`Starting synchronous request to `https://domemaildist.blob.core.windows.net/azuremmblobcontainer`.` |
-| 07b26a5d-... |`StringToSign = DELETE............x-ms-client-request-id:07b26a5d-....x-ms-date:Tue, 03 Jun 2014 10:33:12 GMT.x-ms-version:2014-02-14./domemaildist/azuremmblobcontainer.restype:container.` |
-| 07b26a5d-... |`Waiting for response.` |
-| 07b26a5d-... |`Response received. Status code = 202, Request ID = 6ab2a4cf-..., Content-MD5 = , ETag = .` |
-| 07b26a5d-... |`Response headers were processed successfully, proceeding with the rest of the operation.` |
-| 07b26a5d-... |`Downloading response body.` |
-| 07b26a5d-... |`Operation completed successfully.` |
-| e2d06d78-... |`Starting asynchronous request to https://domemaildist.blob.core.windows.net/azuremmblobcontainer.` |
-| e2d06d78-... |`StringToSign = HEAD............x-ms-client-request-id:e2d06d78-....x-ms-date:Tue, 03 Jun 2014 10:33:12 GMT.x-ms-version:2014-02-14./domemaildist/azuremmblobcontainer.restype:container.` |
-| e2d06d78-... |`Waiting for response.` |
-| de8b1c3c-... |`Starting synchronous request to `https://domemaildist.blob.core.windows.net/azuremmblobcontainer/blobCreated.txt`.` |
-| de8b1c3c-... |`StringToSign = PUT...64.qCmF+TQLPhq/YYK50mP9ZQ==........x-ms-blob-type:BlockBlob.x-ms-client-request-id:de8b1c3c-....x-ms-date:Tue, 03 Jun 2014 10:33:12 GMT.x-ms-version:2014-02-14./domemaildist/azuremmblobcontainer/blobCreated.txt.` |
-| de8b1c3c-... |`Preparing to write request data.` |
-| e2d06d78-... |`Exception thrown while waiting for response: The remote server returned an error: (404) Not Found..` |
-| e2d06d78-... |`Response received. Status code = 404, Request ID = 353ae3bc-..., Content-MD5 = , ETag = .` |
-| e2d06d78-... |`Response headers were processed successfully, proceeding with the rest of the operation.` |
-| e2d06d78-... |`Downloading response body.` |
-| e2d06d78-... |`Operation completed successfully.` |
-| e2d06d78-... |`Starting asynchronous request to https://domemaildist.blob.core.windows.net/azuremmblobcontainer.` |
-| e2d06d78-... |`StringToSign = PUT...0.........x-ms-client-request-id:e2d06d78-....x-ms-date:Tue, 03 Jun 2014 10:33:12 GMT.x-ms-version:2014-02-14./domemaildist/azuremmblobcontainer.restype:container.` |
-| e2d06d78-... |`Waiting for response.` |
-| de8b1c3c-... |`Writing request data.` |
-| de8b1c3c-... |`Waiting for response.` |
-| e2d06d78-... |`Exception thrown while waiting for response: The remote server returned an error: (409) Conflict..` |
-| e2d06d78-... |`Response received. Status code = 409, Request ID = c27da20e-..., Content-MD5 = , ETag = .` |
-| e2d06d78-... |`Downloading error response body.` |
-| de8b1c3c-... |`Exception thrown while waiting for response: The remote server returned an error: (404) Not Found..` |
-| de8b1c3c-... |`Response received. Status code = 404, Request ID = 0eaeab3e-..., Content-MD5 = , ETag = .` |
-| de8b1c3c-... |`Exception thrown during the operation: The remote server returned an error: (404) Not Found..` |
-| de8b1c3c-... |`Retry policy did not allow for a retry. Failing with The remote server returned an error: (404) Not Found..` |
-| e2d06d78-... |`Retry policy did not allow for a retry. Failing with The remote server returned an error: (409) Conflict..` |
-
-In this example, the log shows that the client is interleaving requests from the **CreateIfNotExists** method (request ID e2d06d78…) with the requests from the **UploadFromStream** method (de8b1c3c-...). This interleaving happens because the client application is invoking these methods asynchronously. Modify the asynchronous code in the client to ensure that it creates the container before attempting to upload any data to a blob in that container. Ideally, you should create all your containers in advance.
-
-### A Shared Access Signature (SAS) authorization issue
-
-If the client application attempts to use a SAS key that does not include the necessary permissions for the operation, the storage service returns an HTTP 404 (Not found) message to the client. At the same time, in Azure Monitor metrics, you will also see an **AuthorizationError** for the **ResponseType** dimension.
-
-Investigate why your client application is attempting to perform an operation for which it has not been granted permissions.
-
-### Client-side JavaScript code does not have permission to access the object
-
-If you are using a JavaScript client and the storage service is returning HTTP 404 messages, check for the following JavaScript errors in the browser:
-
-```
-SEC7120: Origin http://localhost:56309 not found in Access-Control-Allow-Origin header.
-SCRIPT7002: XMLHttpRequest: Network Error 0x80070005, Access is denied.
-```
-
-> [!NOTE]
-> You can use the F12 Developer Tools in Internet Explorer to trace the messages exchanged between the browser and the storage service when you are troubleshooting client-side JavaScript issues.
-
-These errors occur because the web browser implements the [same origin policy](https://www.w3.org/Security/wiki/Same_Origin_Policy) security restriction that prevents a web page from calling an API in a different domain from the domain the page comes from.
-
-To work around the JavaScript issue, you can configure Cross Origin Resource Sharing (CORS) for the storage service the client is accessing. For more information, see [Cross-Origin Resource Sharing (CORS) Support for Azure Storage Services](/rest/api/storageservices/Cross-Origin-Resource-Sharing--CORS--Support-for-the-Azure-Storage-Services).
-
-The following code sample shows how to configure your blob service to allow JavaScript running in the Contoso domain to access a blob in your blob storage service:
-
-#### [.NET v12 SDK](#tab/dotnet)
----
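-
-If you prefer to configure the rule from a script rather than the .NET v12 SDK, a rough equivalent using the Az.Storage PowerShell module might look like the following sketch; the account name, key, and allowed origin are placeholders:
-
-```powershell
-# Sketch: allow GET and PUT requests from the Contoso web domain to the blob service (placeholders throughout).
-$ctx = New-AzStorageContext -StorageAccountName "<account-name>" -StorageAccountKey "<account-key>"
-
-$corsRule = @{
-    AllowedOrigins  = @("https://www.contoso.com")
-    AllowedMethods  = @("Get", "Put")
-    AllowedHeaders  = @("*")
-    ExposedHeaders  = @("*")
-    MaxAgeInSeconds = 3600
-}
-
-Set-AzStorageCORSRule -ServiceType Blob -CorsRules $corsRule -Context $ctx
-```
-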
-### Network Failure
-
-In some circumstances, lost network packets can lead to the storage service returning HTTP 404 messages to the client. For example, when your client application is deleting an entity from the table service, you see the client throw a storage exception reporting an "HTTP 404 (Not Found)" status message from the table service. When you investigate the table in the table storage service, you see that the service did delete the entity as requested.
-
-The exception details in the client include the request ID (7e84f12d…) assigned by the table service for the request: you can use this information to locate the request details in the storage resource logs in Azure Monitor by searching in [Fields that describe how the operation was authenticated](../blobs/monitor-blob-storage-reference.md) of log entries. You could also use the metrics to identify when failures such as this occur and then search the log files based on the time the metrics recorded this error. This log entry shows that the delete failed with an "HTTP (404) Client Other Error" status message. The same log entry also includes the request ID generated by the client in the **client-request-id** column (813ea74f…).
-
-The server-side log also includes another entry with the same **client-request-id** value (813ea74f…) for a successful delete operation for the same entity, and from the same client. This successful delete operation took place very shortly before the failed delete request.
-
-The most likely cause of this scenario is that the client sent a delete request for the entity to the table service, which succeeded, but did not receive an acknowledgment from the server (perhaps due to a temporary network issue). The client then automatically retried the operation (using the same **client-request-id**), and this retry failed because the entity had already been deleted.
-
-If this problem occurs frequently, you should investigate why the client is failing to receive acknowledgments from the table service. If the problem is intermittent, you should trap the "HTTP (404) Not Found" error and log it in the client, but allow the client to continue.
-
-## The client is receiving HTTP 409 (Conflict) messages
-
-When a client deletes blob containers, tables, or queues, there is a brief period before the name becomes available again. If the code in your client application deletes and then immediately recreates a blob container using the same name, the **CreateIfNotExists** method eventually fails with the HTTP 409 (Conflict) error.
-
-The client application should use unique container names whenever it creates new containers if the delete/recreate pattern is common.
-
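-For example, rather than reusing a name that was just deleted, the client can append a unique suffix to each new container. The following PowerShell sketch assumes the Az.Storage module; the account name, key, and name prefix are placeholders:
-
-```powershell
-# Sketch: create a container with a unique name instead of recreating a just-deleted one.
-$ctx = New-AzStorageContext -StorageAccountName "<account-name>" -StorageAccountKey "<account-key>"
-
-# Container names must be lowercase; a short GUID fragment keeps each name unique.
-$containerName = "ingest-" + [guid]::NewGuid().ToString("N").Substring(0, 8)
-New-AzStorageContainer -Name $containerName -Context $ctx
-```
-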
-## Metrics show low PercentSuccess or analytics log entries have operations with transaction status of ClientOtherErrors
-
-A **ResponseType** dimension equal to a value of **Success** captures the percent of operations that were successful based on their HTTP Status Code. Operations with status codes of 2XX count as successful, whereas operations with status codes in 3XX, 4XX and 5XX ranges are counted as unsuccessful and lower the Success metric value. In storage resource logs, these operations are recorded with a transaction status of **ClientOtherError**.
-
-It is important to note that these operations have completed successfully and therefore do not affect other metrics such as availability. Some examples of operations that execute successfully but that can result in unsuccessful HTTP status codes include:
-- **ResourceNotFound** (Not Found 404), for example from a GET request to a blob that does not exist.
-- **ResourceAlreadyExists** (Conflict 409), for example from a **CreateIfNotExist** operation where the resource already exists.
-- **ConditionNotMet** (Not Modified 304), for example from a conditional operation such as when a client sends an **ETag** value and an HTTP **If-None-Match** header to request an image only if it has been updated since the last operation.
-
-You can find a list of common REST API error codes that the storage services return on the page [Common REST API Error Codes](/rest/api/storageservices/Common-REST-API-Error-Codes).
--
-## See also
-- [Monitoring Azure Blob Storage](../blobs/monitor-blob-storage.md)
-- [Monitoring Azure Files](../files/storage-files-monitoring.md)
-- [Monitoring Azure Queue Storage](../queues/monitor-queue-storage.md)
-- [Monitoring Azure Table storage](../tables/monitor-table-storage.md)
-- [Troubleshoot performance issues](../common/troubleshoot-storage-performance.md?toc=/azure/storage/blobs/toc.json)
-- [Troubleshoot availability issues](../common/troubleshoot-storage-availability.md?toc=/azure/storage/blobs/toc.json)
-- [Monitor, diagnose, and troubleshoot your Azure Storage](/training/modules/monitor-diagnose-and-troubleshoot-azure-storage/)
storage Troubleshoot Storage Performance https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/common/troubleshoot-storage-performance.md
- Title: Troubleshoot performance issues in Azure Storage accounts
-description: Identify and troubleshoot performance issues in Azure Storage accounts.
--- Previously updated : 05/23/2022------
-# Troubleshoot performance in Azure Storage accounts
-
-This article helps you investigate unexpected changes in behavior (such as slower than usual response times). These changes in behavior can often be identified by monitoring storage metrics in Azure Monitor. For general information about using metrics and logs in Azure Monitor, see
-- [Monitoring Azure Blob Storage](../blobs/monitor-blob-storage.md)
-- [Monitoring Azure Files](../files/storage-files-monitoring.md)
-- [Monitoring Azure Queue Storage](../queues/monitor-queue-storage.md)
-- [Monitoring Azure Table storage](../tables/monitor-table-storage.md)
-
-## Monitoring performance
-
-To monitor the performance of the storage services, you can use the following metrics.
-- The **SuccessE2ELatency** and **SuccessServerLatency** metrics show the average time the storage service or API operation type is taking to process requests. **SuccessE2ELatency** is a measure of end-to-end latency that includes the time taken to read the request and send the response in addition to the time taken to process the request (it therefore includes network latency once the request reaches the storage service); **SuccessServerLatency** is a measure of just the processing time and therefore excludes any network latency related to communicating with the client.
-
-- The **Egress** and **Ingress** metrics show the total amount of data, in bytes, coming in to and going out of your storage service or through a specific API operation type.
-
-- The **Transactions** metric shows the total number of requests that the storage service or API operation is receiving.
-
-You can monitor for unexpected changes in any of these values. These changes could indicate an issue that requires further investigation.
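As a rough illustration of pulling these values outside the portal, the following PowerShell sketch queries a few of the metrics with `Get-AzMetric` from the Az.Monitor module. The resource ID is a placeholder, and the metric names assume the account-level metric definitions:

```powershell
# Placeholder resource ID for the storage account you want to inspect.
$resourceId = "/subscriptions/<subscription-id>/resourceGroups/<resource-group>/providers/Microsoft.Storage/storageAccounts/<account-name>"

# Average latency over the last hour, at one-minute grain.
Get-AzMetric -ResourceId $resourceId `
    -MetricName "SuccessE2ELatency","SuccessServerLatency" `
    -TimeGrain 00:01:00 `
    -StartTime (Get-Date).AddHours(-1) `
    -AggregationType Average

# Total request count over the same period.
Get-AzMetric -ResourceId $resourceId -MetricName "Transactions" -TimeGrain 00:01:00 -AggregationType Total
```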
-
-In the [Azure portal](https://portal.azure.com), you can add alert rules which notify you when any of the performance metrics for this service fall below or exceed a threshold that you specify.
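A hedged sketch of creating such an alert rule with the Az.Monitor cmdlets follows; the resource IDs, the 500 ms threshold, and the alert name are placeholders you would adjust for your environment:

```powershell
# Alert when average SuccessE2ELatency exceeds 500 ms over a 5-minute window.
$criteria = New-AzMetricAlertRuleV2Criteria -MetricName "SuccessE2ELatency" `
    -TimeAggregation Average -Operator GreaterThan -Threshold 500

New-AzMetricAlertRuleV2 -Name "storage-e2e-latency" `
    -ResourceGroupName "<resource-group>" `
    -TargetResourceId "/subscriptions/<subscription-id>/resourceGroups/<resource-group>/providers/Microsoft.Storage/storageAccounts/<account-name>" `
    -WindowSize 00:05:00 -Frequency 00:01:00 `
    -Condition $criteria -Severity 3
```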
-
-## Diagnose performance issues
-
-The performance of an application can be subjective, especially from a user perspective. Therefore, it is important to have baseline metrics available to help you identify where there might be a performance issue. Many factors might affect the performance of an Azure storage service from the client application perspective. These factors might operate in the storage service, in the client, or in the network infrastructure; therefore it is important to have a strategy for identifying the origin of the performance issue.
-
-After you have identified the likely location of the cause of the performance issue from the metrics, you can then use the log files to find detailed information to diagnose and troubleshoot the problem further.
-
-## Metrics show high SuccessE2ELatency and low SuccessServerLatency
-
-In some cases, you might find that **SuccessE2ELatency** is significantly higher than the **SuccessServerLatency**. The storage service only calculates the metric **SuccessE2ELatency** for successful requests and, unlike **SuccessServerLatency**, includes the time the client takes to send the data and receive acknowledgment from the storage service. Therefore, a difference between **SuccessE2ELatency** and **SuccessServerLatency** could be either due to the client application being slow to respond, or due to conditions on the network.
-
-> [!NOTE]
-> You can also view **E2ELatency** and **ServerLatency** for individual storage operations in the Storage Logging log data.
-
-### Investigating client performance issues
-
-Possible reasons for the client responding slowly include having a limited number of available connections or threads, or being low on resources such as CPU, memory or network bandwidth. You may be able to resolve the issue by modifying the client code to be more efficient (for example by using asynchronous calls to the storage service), or by using a larger Virtual Machine (with more cores and more memory).
-
-For the table and queue services, the Nagle algorithm can also cause high **SuccessE2ELatency** as compared to **SuccessServerLatency**: for more information, see the post [Nagle's Algorithm is Not Friendly towards Small Requests](/archive/blogs/windowsazurestorage/nagles-algorithm-is-not-friendly-towards-small-requests). You can disable the Nagle algorithm in code by using the **ServicePointManager** class in the **System.Net** namespace. Do this in startup code (for example, the **Application_Start** method of a worker role), before you make any calls to the table or queue services, because the setting doesn't affect connections that are already open.
--
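A rough PowerShell sketch of the same **ServicePointManager** setting (the storage account name is a placeholder, and .NET Framework-style service point behavior is assumed):

```powershell
# Turn off Nagle for the table and queue endpoints before any requests are made.
$account = "<storage-account-name>"   # placeholder

foreach ($endpoint in "https://$account.table.core.windows.net", "https://$account.queue.core.windows.net") {
    $servicePoint = [System.Net.ServicePointManager]::FindServicePoint([Uri]$endpoint)
    $servicePoint.UseNagleAlgorithm = $false
}
```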
-You should check the client-side logs to see how many requests your client application is submitting, and check for general .NET related performance bottlenecks in your client such as CPU, .NET garbage collection, network utilization, or memory. As a starting point for troubleshooting .NET client applications, see [Debugging, Tracing, and Profiling](/dotnet/framework/debug-trace-profile/).
-
-### Investigating network latency issues
-
-Typically, high end-to-end latency caused by the network is due to transient conditions. You can investigate both transient and persistent network issues such as dropped packets by using tools such as Wireshark.
-
-## Metrics show low SuccessE2ELatency and low SuccessServerLatency but the client is experiencing high latency
-
-In this scenario, the most likely cause is a delay in the storage request reaching the storage service. You should investigate why requests from the client are not making it through to the blob service.
-
-One possible reason for the client delaying sending requests is that there are a limited number of available connections or threads.
-
-Also check whether the client is performing multiple retries, and investigate the reason if it is. To determine whether the client is performing multiple retries, you can:
-- Examine the storage logs. If multiple retries are happening, you'll see multiple operations with the same client request IDs.
-
-- Examine the client logs. Verbose logging will indicate that a retry has occurred.
-
-- Debug your code, and check the properties of the **OperationContext** object associated with the request. If the operation has retried, the **RequestResults** property will include multiple unique requests. You can also check the start and end times for each request.
-
-If there are no issues in the client, you should investigate potential network issues such as packet loss. You can use tools such as Wireshark to investigate network issues.
-
-## Metrics show high SuccessServerLatency
-
-In the case of high **SuccessServerLatency** for blob download requests, you should use the Storage logs to see if there are repeated requests for the same blob (or set of blobs). For blob upload requests, you should investigate what block size the client is using (for example, blocks less than 64 K in size can result in overheads unless the reads are also in less than 64 K chunks), and if multiple clients are uploading blocks to the same blob in parallel. You should also check the per-minute metrics for spikes in the number of requests that result in exceeding the per second scalability targets.
-
-If you are seeing high **SuccessServerLatency** for blob download requests when there are repeated requests for the same blob or set of blobs, then you should consider caching these blobs using Azure Cache or the Azure Content Delivery Network (CDN). For upload requests, you can improve the throughput by using a larger block size. For queries to tables, it's also possible to implement client-side caching on clients that perform the same query operations and where the data doesn't change frequently.
-
-High **SuccessServerLatency** values can also be a symptom of poorly designed tables or queries that result in scan operations or that follow the append/prepend anti-pattern.
-
-> [!NOTE]
-> You can find a comprehensive performance checklist here: [Microsoft Azure Storage Performance and Scalability Checklist](../blobs/storage-performance-checklist.md).
-
-## You are experiencing unexpected delays in message delivery on a queue
-
-If you are experiencing a delay between the time an application adds a message to a queue and the time it becomes available to read from the queue, then you should take the following steps to diagnose the issue:
-- Verify the application is successfully adding the messages to the queue. Check that the application is not retrying the **AddMessage** method several times before succeeding.
-
-- Verify there is no clock skew between the worker role that adds the message to the queue and the worker role that reads it, which can make it appear as if there is a delay in processing.
-
-- Check if the worker role that reads the messages from the queue is failing. If a queue client calls the **GetMessage** method but fails to respond with an acknowledgment, the message will remain invisible on the queue until the **invisibilityTimeout** period expires. At this point, the message becomes available for processing again.
-
-- Check if the queue length is growing over time (see the sketch after this list). This can occur if you do not have sufficient workers available to process all of the messages that other workers are placing on the queue. Also check the metrics to see if delete requests are failing and the dequeue count on messages, which might indicate repeated failed attempts to delete the message.
-
-- Examine the Storage logs for any queue operations that have higher than expected **E2ELatency** and **ServerLatency** values over a longer period of time than usual.
-
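For the queue-length check, a minimal Az.Storage sketch is shown below; the account and queue names are placeholders, and the signed-in identity is assumed to hold a data-plane role such as Storage Queue Data Reader:

```powershell
# Check the approximate number of messages waiting in the queue.
$ctx = New-AzStorageContext -StorageAccountName "<account-name>" -UseConnectedAccount
$queue = Get-AzStorageQueue -Name "<queue-name>" -Context $ctx

# A value that grows steadily over time suggests there aren't enough consumers.
$queue.ApproximateMessageCount
# On newer Az.Storage versions the count may instead be available via:
# $queue.QueueClient.GetProperties().Value.ApproximateMessagesCount
```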
-## See also
-- [Troubleshoot client application errors](../common/troubleshoot-storage-client-application-errors.md?toc=/azure/storage/blobs/toc.json)
-- [Troubleshoot availability issues](../common/troubleshoot-storage-availability.md?toc=/azure/storage/blobs/toc.json)
-- [Monitor, diagnose, and troubleshoot your Azure Storage](/training/modules/monitor-diagnose-and-troubleshoot-azure-storage/)
storage File Sync Troubleshoot Cloud Tiering https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/file-sync/file-sync-troubleshoot-cloud-tiering.md
If files fail to tier to Azure Files:
| 0x80c80261 | -2134375839 | ECS_E_GHOSTING_MIN_FILE_SIZE | The file failed to tier because the file size is less than the supported size. | The minimum supported file size is based on the file system cluster size (double file system cluster size). For example, if the file system cluster size is 4 KiB, the minimum file size is 8 KiB. |
| 0x80c83007 | -2134364153 | ECS_E_STORAGE_ERROR | The file failed to tier due to an Azure storage issue. | If the error persists, open a support request. |
| 0x800703e3 | -2147023901 | ERROR_OPERATION_ABORTED | The file failed to tier because it was recalled at the same time. | No action required. The file will be tiered when the recall completes and the file is no longer in use. |
-| 0x80c80264 | -2134375836 | ECS_E_GHOSTING_FILE_NOT_SYNCED | The file failed to tier because it has not synced to the Azure file share. | No action required. The file will tier once it has synced to the Azure file share. |
+| 0x80c80264 | -2134375836 | ECS_E_GHOSTING_FILE_NOT_SYNCED | The file failed to tier because it hasn't synced to the Azure file share. | No action required. The file will tier once it has synced to the Azure file share. |
| 0x80070001 | -2147942401 | ERROR_INVALID_FUNCTION | The file failed to tier because the cloud tiering filter driver (storagesync.sys) isn't running. | To resolve this issue, open an elevated command prompt and run the following command: `fltmc load storagesync`<br>If the Azure File Sync filter driver fails to load when running the `fltmc` command, uninstall the Azure File Sync agent, restart the server, and reinstall the Azure File Sync agent. |
| 0x80070070 | -2147024784 | ERROR_DISK_FULL | The file failed to tier due to insufficient disk space on the volume where the server endpoint is located. | To resolve this issue, free at least 100 MiB of disk space on the volume where the server endpoint is located. |
-| 0x80070490 | -2147023728 | ERROR_NOT_FOUND | The file failed to tier because it has not synced to the Azure file share. | No action required. The file will tier once it has synced to the Azure file share. |
+| 0x80070490 | -2147023728 | ERROR_NOT_FOUND | The file failed to tier because it hasn't synced to the Azure file share. | No action required. The file will tier once it has synced to the Azure file share. |
| 0x80c80262 | -2134375838 | ECS_E_GHOSTING_UNSUPPORTED_RP | The file failed to tier because it's an unsupported reparse point. | If the file is a Data Deduplication reparse point, follow the steps in the [planning guide](file-sync-planning.md#data-deduplication) to enable Data Deduplication support. Files with reparse points other than Data Deduplication aren't supported and won't be tiered. |
-| 0x80c83052 | -2134364078 | ECS_E_CREATE_SV_STREAM_ID_MISMATCH | The file failed to tier because it has been modified. | No action required. The file will tier once the modified file has synced to the Azure file share. |
-| 0x80c80269 | -2134375831 | ECS_E_GHOSTING_REPLICA_NOT_FOUND | The file failed to tier because it has not synced to the Azure file share. | No action required. The file will tier once it has synced to the Azure file share. |
+| 0x80c83052 | -2134364078 | ECS_E_CREATE_SV_STREAM_ID_<br>MISMATCH | The file failed to tier because it has been modified. | No action required. The file will tier once the modified file has synced to the Azure file share. |
+| 0x80c80269 | -2134375831 | ECS_E_GHOSTING_REPLICA_NOT_<br>FOUND | The file failed to tier because it hasn't synced to the Azure file share. | No action required. The file will tier once it has synced to the Azure file share. |
| 0x80072ee2 | -2147012894 | WININET_E_TIMEOUT | The file failed to tier due to a network issue. | No action required. If the error persists, check network connectivity to the Azure file share. |
| 0x80c80017 | -2134376425 | ECS_E_SYNC_OPLOCK_BROKEN | The file failed to tier because it has been modified. | No action required. The file will tier once the modified file has synced to the Azure file share. |
| 0x800705aa | -2147023446 | ERROR_NO_SYSTEM_RESOURCES | The file failed to tier due to insufficient system resources. | If the error persists, investigate which application or kernel-mode driver is exhausting system resources. |
| 0x8e5e03fe | -1906441218 | JET_errDiskIO | The file failed to tier due to an I/O error when writing to the cloud tiering database. | If the error persists, run chkdsk on the volume and check the storage hardware. |
| 0x8e5e0442 | -1906441150 | JET_errInstanceUnavailable | The file failed to tier because the cloud tiering database isn't running. | To resolve this issue, restart the FileSyncSvc service or server. If the error persists, run chkdsk on the volume and check the storage hardware. |
-| 0x80C80285 | -2134375803 | ECS_E_GHOSTING_SKIPPED_BY_CUSTOM_EXCLUSION_LIST | The file can't be tiered because the file type is excluded from tiering. | To tier files with this file type, modify the GhostingExclusionList registry setting which is located under HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Azure\StorageSync. |
-| 0x80C86050 | -2134351792 | ECS_E_REPLICA_NOT_READY_FOR_TIERING | The file failed to tier because the current sync mode is initial upload or reconciliation. | No action required. The file will be tiered once sync completes initial upload or reconciliation. |
+| 0x80C80285 | -2134375803 | ECS_E_GHOSTING_SKIPPED_BY_<br>CUSTOM_EXCLUSION_LIST | The file can't be tiered because the file type is excluded from tiering. | To tier files with this file type, modify the GhostingExclusionList registry setting which is located under HKEY_LOCAL_MACHINE<br>\SOFTWARE\Microsoft\Azure\StorageSync |
+| 0x80C86050 | -2134351792 | ECS_E_REPLICA_NOT_READY_FOR_<br>TIERING | The file failed to tier because the current sync mode is initial upload or reconciliation. | No action required. The file will be tiered once sync completes initial upload or reconciliation. |
## How to troubleshoot files that fail to be recalled

If files fail to be recalled:
If files fail to be recalled:
| 0x80c80037 | -2134376393 | ECS_E_SYNC_SHARE_NOT_FOUND | The file failed to recall because the server endpoint was deleted. | To resolve this issue, see [Tiered files aren't accessible on the server after deleting a server endpoint](?tabs=portal1%252cazure-portal#tiered-files-are-not-accessible-on-the-server-after-deleting-a-server-endpoint). |
| 0x80070005 | -2147024891 | ERROR_ACCESS_DENIED | The file failed to recall due to an access denied error. This issue can occur if the firewall and virtual network settings on the storage account are enabled and the server does not have access to the storage account. | To resolve this issue, add the Server IP address or virtual network by following the steps documented in the [Configure firewall and virtual network settings](file-sync-deployment-guide.md?tabs=azure-portal#optional-configure-firewall-and-virtual-network-settings) section in the deployment guide. |
| 0x80c86002 | -2134351870 | ECS_E_AZURE_RESOURCE_NOT_FOUND | The file failed to recall because it's not accessible in the Azure file share. | To resolve this issue, verify the file exists in the Azure file share. If the file exists in the Azure file share, upgrade to the latest Azure File Sync [agent version](file-sync-release-notes.md#supported-versions). |
-| 0x80c8305f | -2134364065 | ECS_E_EXTERNAL_STORAGE_ACCOUNT_AUTHORIZATION_FAILED | The file failed to recall due to authorization failure to the storage account. | To resolve this issue, verify [Azure File Sync has access to the storage account](file-sync-troubleshoot-sync-errors.md?tabs=portal1%252cazure-portal#troubleshoot-rbac). |
+| 0x80c8305f | -2134364065 | ECS_E_EXTERNAL_STORAGE_ACCOUNT_<br>AUTHORIZATION_FAILED | The file failed to recall due to authorization failure to the storage account. | To resolve this issue, verify [Azure File Sync has access to the storage account](file-sync-troubleshoot-sync-errors.md?tabs=portal1%252cazure-portal#troubleshoot-rbac). |
| 0x80c86030 | -2134351824 | ECS_E_AZURE_FILE_SHARE_NOT_FOUND | The file failed to recall because the Azure file share isn't accessible. | Verify the file share exists and is accessible. If the file share was deleted and recreated, perform the steps documented in the [Sync failed because the Azure file share was deleted and recreated](file-sync-troubleshoot-sync-errors.md?tabs=portal1%252cazure-portal#-2134375810) section to delete and recreate the sync group. |
| 0x800705aa | -2147023446 | ERROR_NO_SYSTEM_RESOURCES | The file failed to recall due to insufficient system resources. | If the error persists, investigate which application or kernel-mode driver is exhausting system resources. |
| 0x8007000e | -2147024882 | ERROR_OUTOFMEMORY | The file failed to recall due to insufficient memory. | If the error persists, investigate which application or kernel-mode driver is causing the low memory condition. |
| 0x80070070 | -2147024784 | ERROR_DISK_FULL | The file failed to recall due to insufficient disk space. | To resolve this issue, free up space on the volume by moving files to a different volume, increase the size of the volume, or force files to tier by using the `Invoke-StorageSyncCloudTiering` cmdlet. |
| 0x80072f8f | -2147012721 | WININET_E_DECODING_FAILED | The file failed to recall because the server was unable to decode the response from the Azure File Sync service. | This error typically occurs if a network proxy is modifying the response from the Azure File Sync service. Please check your proxy configuration. |
-| 0x80090352 | -2146892974 | SEC_E_ISSUING_CA_UNTRUSTED | The file failed to recall because your organization is using a TLS terminating proxy or a malicious entity is intercepting the traffic between your server and the Azure File Sync service. | If you are certain this is expected (because your organization is using a TLS terminating proxy), follow the steps documented for error [CERT_E_UNTRUSTEDROOT](file-sync-troubleshoot-sync-errors.md#-2146762487) to resolve this issue. |
+| 0x80090352 | -2146892974 | SEC_E_ISSUING_CA_UNTRUSTED | The file failed to recall because your organization is using a TLS terminating proxy or a malicious entity is intercepting the traffic between your server and the Azure File Sync service. | If you're certain this is expected (because your organization is using a TLS terminating proxy), follow the steps documented for error [CERT_E_UNTRUSTEDROOT](file-sync-troubleshoot-sync-errors.md#-2146762487) to resolve this issue. |
| 0x80c86047 | -2134351801 | ECS_E_AZURE_SHARE_SNAPSHOT_NOT_FOUND | The file failed to recall because it's referencing a version of the file which no longer exists in the Azure file share. | This issue can occur if the tiered file was restored from a backup of the Windows Server. To resolve this issue, restore the file from a snapshot in the Azure file share. |

## Tiered files are not accessible on the server after deleting a server endpoint
$orphanFiles.OrphanedTieredFiles > OrphanTieredFiles.txt
*Option 1: Delete the orphaned tiered files*
-This option deletes the orphaned tiered files on the Windows Server but requires removing the server endpoint if it exists due to recreation after 30 days or is connected to a different sync group. File conflicts will occur if files are updated on the Windows Server or Azure file share before the server endpoint is recreated.
+This option deletes the orphaned tiered files on the Windows Server but requires removing the server endpoint if it exists due to re-creation after 30 days or is connected to a different sync group. File conflicts will occur if files are updated on the Windows Server or Azure file share before the server endpoint is recreated.
1. Back up the Azure file share and server endpoint location.
2. Remove the server endpoint in the sync group (if it exists) by following the steps documented in [Remove a server endpoint](file-sync-server-endpoint-delete.md).
$orphanFilesRemoved = Remove-StorageSyncOrphanedTieredFiles -Path <folder path c
$orphanFilesRemoved.OrphanedTieredFiles > DeletedOrphanFiles.txt
```

**Notes**
-- Tiered files modified on the server that are not synced to the Azure file share will be deleted.
+- Tiered files modified on the server that aren't synced to the Azure file share will be deleted.
- Tiered files that are accessible (not orphaned) won't be deleted.
- Non-tiered files will remain on the server.
synapse-analytics Develop Materialized View Performance Tuning https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/sql/develop-materialized-view-performance-tuning.md
ORDER BY t_s_secyear.customer_id
OPTION ( LABEL = 'Query04-af359846-253-3'); ```
-Check the query's [estimated execution plan](/sql/relational-databases/performance/display-the-estimated-execution-plan.md). There are 18 shuffles and 17 joins operations, which take more time to execute.
+Check the query's [estimated execution plan](/sql/relational-databases/performance/display-the-estimated-execution-plan). There are 18 shuffle and 17 join operations, which take more time to execute.
Now, let's create one materialized view for each of the three sub-SELECT statements.
With materialized views, the same query runs much faster without any code change
For more development tips, see [Synapse SQL development overview](develop-overview.md).

- [Monitor your Azure Synapse Analytics dedicated SQL pool workload using DMVs](../sql-data-warehouse/sql-data-warehouse-manage-monitor.md).
-- [View estimated execution plan](/sql/relational-databases/performance/display-the-estimated-execution-plan.md)
+- [View estimated execution plan](/sql/relational-databases/performance/display-the-estimated-execution-plan)
virtual-desktop App Attach Azure Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/app-attach-azure-portal.md
To publish the apps:
2. Select the application group you want to publish the apps to.

>[!NOTE]
- >MSIX applications can be delivered with MSIX app attach to both remote app and desktop app groups. When a MSIX package is assigned to a remote app group and desktop app group from the same host pool the desktop app group will be displayed in the feed.
+ >MSIX applications can be delivered with MSIX app attach to both RemoteApp and Desktop application groups. When an MSIX package is assigned to a RemoteApp application group and a Desktop application group from the same host pool, the Desktop application group will be displayed in the feed.
-3. Once you're in the app group, select the **Applications** tab. The **Applications** grid will display all existing apps within the app group.
+3. Once you're in the application group, select the **Applications** tab. The **Applications** grid will display all existing apps within the application group.
4. Select **+ Add** to open the **Add application** tab.
To publish the apps:
> ![A screenshot of the user selecting + Add to open the add application tab](media/select-add.png)

5. For **Application source**, choose the source for your application.
- - If you're using a Desktop app group, choose **MSIX package**.
+ - If you're using a Desktop application group, choose **MSIX package**.
> [!div class="mx-imgBorder"]
> ![A screenshot of a customer selecting MSIX package from the application source drop-down menu. MSIX package is highlighted in red.](media/select-source.png)
- - If you're using a remote app group, choose one of the following options:
+ - If you're using a RemoteApp application group, choose one of the following options:
- Start menu
- App path
To publish the apps:
- For **Description**, enter a short description of the app package.
- - If you're using a remote app group, you can also configure these options:
+ - If you're using a RemoteApp application group, you can also configure these options:
- **Icon path**
- **Icon index**

6. When you're done, select **Save**.
-## Assign a user to an app group
+## Assign a user to an application group
-After assigning MSIX apps to an app group, you'll need to grant users access to them. You can assign access by adding users or user groups to an app group with published MSIX applications. Follow the instructions in [Manage app groups with the Azure portal](manage-app-groups.md) to assign your users to an app group.
+After assigning MSIX apps to an application group, you'll need to grant users access to them. You can assign access by adding users or user groups to an application group with published MSIX applications. Follow the instructions in [Manage application groups with the Azure portal](manage-app-groups.md) to assign your users to an application group.
## Change MSIX package state
virtual-desktop App Attach Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/app-attach-powershell.md
Here's what you need to configure MSIX app attach:
- A functioning Azure Virtual Desktop deployment. To learn how to deploy Azure Virtual Desktop (classic), see [Create a tenant in Azure Virtual Desktop](./virtual-desktop-fall-2019/tenant-setup-azure-active-directory.md). To learn how to deploy Azure Virtual Desktop with Azure Resource Manager integration, see [Create a host pool with the Azure portal](./create-host-pools-azure-marketplace.md).
- An Azure Virtual Desktop host pool with at least one active session host.
-- A Desktop remote app group.
+- A Desktop or RemoteApp application group.
- The MSIX packaging tool.
- An MSIX-packaged application expanded into an MSIX image that's uploaded into a file share.
- A file share in your Azure Virtual Desktop deployment where the MSIX package will be stored.
To remove the package, run this cmdlet:
Remove-AzWvdMsixPackage -FullName $obj.PackageFullName -HostPoolName $hp -ResourceGroupName $rg
```
-## Publish MSIX apps to an app group
+## Publish MSIX apps to an application group
-You can only follow the instructions in this section if you've finished following the instructions in the previous sections. If you have a host pool with an active session host, at least one Desktop app group, and have added an MSIX package to the host pool, you're ready to go.
+You can only follow the instructions in this section if you've finished following the instructions in the previous sections. If you have a host pool with an active session host, at least one Desktop application group, and have added an MSIX package to the host pool, you're ready to go.
-To publish an app from the MSIX package to an app group, you'll need to find its name, then use that name in the publishing cmdlet.
+To publish an app from the MSIX package to an application group, you'll need to find its name, then use that name in the publishing cmdlet.
To publish an app:
-Run this cmdlet to list all available app groups:
+Run this cmdlet to list all available application groups:
```powershell
Get-AzWvdApplicationGroup -ResourceGroupName $rg -SubscriptionId $subId
```
-When you've found the name of the app group you want to publish apps to, use its name in this cmdlet:
+When you've found the name of the application group you want to publish apps to, use its name in this cmdlet:
```powershell
$grName = "<AppGroupName>"
$grName = "<AppGroupName>"
Finally, you'll need to publish the app.

-- To publish MSIX application to a desktop app group, run this cmdlet:
+- To publish an MSIX application to a desktop application group, run this cmdlet:
```powershell
New-AzWvdApplication -ResourceGroupName $rg -SubscriptionId $subId -Name PowerBi -ApplicationType MsixApplication -ApplicationGroupName $grName -MsixPackageFamilyName $obj.PackageFamilyName -CommandLineSetting 0
```

-- To publish the app to a remote app group, run this cmdlet instead:
+- To publish the app to a RemoteApp application group, run this cmdlet instead:
```powershell
New-AzWvdApplication -ResourceGroupName $rg -SubscriptionId $subId -Name PowerBi -ApplicationType MsixApplication -ApplicationGroupName $grName -MsixPackageFamilyName $obj.PackageFamilyName -CommandLineSetting 0 -MsixPackageApplicationId $obj.PackageApplication.AppId
```

>[!NOTE]
->If a user is assigned to both a remote app group and a desktop app group in the same host pool, when the user connects to their remote desktop, they will see MSIX apps from both groups.
+>If a user is assigned to both a RemoteApp application group and a desktop application group in the same host pool, when the user connects to their remote desktop, they will see MSIX apps from both groups.
## Next steps
virtual-desktop Automatic Migration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/automatic-migration.md
Before you use the migration module, make sure you have the following things rea
- PowerShell or PowerShell ISE to run the scripts you'll see in this article. The Microsoft.RdInfra.RDPowershell module doesn't work in PowerShell Core.

>[!IMPORTANT]
->Migration only creates service objects in the US geography. If you try to migrate your service objects to another geography, it won't work. Also, if you have more than 500 app groups in your Azure Virtual Desktop (classic) deployment, you won't be able to migrate. You'll only be able to migrate if you rebuild your environment to reduce the number of app groups within your Azure Active Directory (Azure AD) tenant.
+>Migration only creates service objects in the US geography. If you try to migrate your service objects to another geography, it won't work. Also, if you have more than 500 application groups in your Azure Virtual Desktop (classic) deployment, you won't be able to migrate. You'll only be able to migrate if you rebuild your environment to reduce the number of application groups within your Azure Active Directory (Azure AD) tenant.
## Prepare your PowerShell environment
To migrate your Azure virtual Desktop (classic) resources to Azure Resource Mana
You'll also need to specify a user assignment mode for the existing user assignments:
- - Use **Copy** to copy all user assignments from your old app groups to Azure Resource Manager application groups. Users will be able to see feeds for both versions of their clients.
- - Use **None** if you don't want to change the user assignments. Later, you can assign users or user groups to app groups with the Azure portal, PowerShell, or API. Users will only be able to see feeds using the Azure Virtual Desktop (classic) clients.
+ - Use **Copy** to copy all user assignments from your old application groups to Azure Resource Manager application groups. Users will be able to see feeds for both versions of their clients.
+ - Use **None** if you don't want to change the user assignments. Later, you can assign users or user groups to application groups with the Azure portal, PowerShell, or API. Users will only be able to see feeds using the Azure Virtual Desktop (classic) clients.
You can only copy 2,000 user assignments per subscription, so your limit will depend on how many assignments are already in your subscription. The module calculates the limit based on how many assignments you already have. If you don't have enough assignments to copy, you'll get an error message that says "Insufficient role assignment quota to copy user assignments. Rerun command without the -CopyUserAssignments switch to migrate."
To migrate your Azure virtual Desktop (classic) resources to Azure Resource Mana
- A resource group called "Tenantname," which contains your workspace.
- - A resource group called "Tenantname_originalHostPoolName," which contains the host pool and desktop app groups.
+ - A resource group called "Tenantname_originalHostPoolName," which contains the host pool and desktop application groups.
- - Any users you published to the newly created app groups.
+ - Any users you published to the newly created application groups.
- Virtual machines will be available in both existing and new host pools to avoid user downtime during the migration process. This lets users connect to the same user session. Since these new Azure service objects are Azure Resource Manager objects, the module can't set Role-based Access Control (RBAC) permissions or diagnostic settings on them. Therefore, you'll need to update the RBAC permissions and settings for these objects manually.
- Once the module validates the initial user connections, you can also publish the app group to more users or user groups, if you'd like.
+ Once the module validates the initial user connections, you can also publish the application group to more users or user groups, if you'd like.
>[!NOTE]
- >After migration, if you move app groups to a different resource group after assigning permissions to users, it will remove all RBAC roles. You'll need to reassign users RBAC permissions all over again.
+ >After migration, if you move application groups to a different resource group after assigning permissions to users, it will remove all RBAC roles. You'll need to reassign users RBAC permissions all over again.
-4. If you want to delete all Azure Virtual Desktop (classic) service objects, run **Complete-RdsHostPoolMigration** to finish the migration process. This cmdlet will delete all Azure Virtual Desktop (classic) objects, leaving only the new Azure objects. Users will only be able to see the feed for the newly created app groups on their clients. Once this command is done, you can safely delete the Azure Virtual Desktop (classic) tenant to finish the process.
+4. If you want to delete all Azure Virtual Desktop (classic) service objects, run **Complete-RdsHostPoolMigration** to finish the migration process. This cmdlet will delete all Azure Virtual Desktop (classic) objects, leaving only the new Azure objects. Users will only be able to see the feed for the newly created application groups on their clients. Once this command is done, you can safely delete the Azure Virtual Desktop (classic) tenant to finish the process.
For example:
To migrate your Azure virtual Desktop (classic) resources to Azure Resource Mana
Complete-RdsHostPoolMigration -Tenant Contoso -HostPool Office -Location EastUS
```
- This will delete all service objects created by Azure Virtual Desktop (classic). You will be left with just the new Azure objects and users will only be able to see the feed for the newly created app groups on their clients. Once you are done finalizing your migration, you need to explicitly delete the tenant in Azure Virtual Desktop (classic).
+ This will delete all service objects created by Azure Virtual Desktop (classic). You will be left with just the new Azure objects and users will only be able to see the feed for the newly created application groups on their clients. Once you are done finalizing your migration, you need to explicitly delete the tenant in Azure Virtual Desktop (classic).
5. If you've changed your mind about migrating and want to revert the process, run the **Revert-RdsHostPoolMigration** cmdlet.
virtual-desktop Azure Ad Joined Session Hosts https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/azure-ad-joined-session-hosts.md
You can deploy Azure AD-joined VMs directly from the Azure portal when you [crea
### Assign user access to host pools
-After you've created your host pool, you must assign users access to their resources. To grant access to resources, add each user to the app group. Follow the instructions in [Manage app groups](manage-app-groups.md) to assign user access to apps and desktops. We recommend that you use user groups instead of individual users wherever possible.
+After you've created your host pool, you must assign users access to their resources. To grant access to resources, add each user to the application group. Follow the instructions in [Manage application groups](manage-app-groups.md) to assign user access to apps and desktops. We recommend that you use user groups instead of individual users wherever possible.
For Azure AD-joined VMs, you'll need to do two extra things on top of the requirements for Active Directory or Azure Active Directory Domain Services-based deployments:

- Assign your users the **Virtual Machine User Login** role so they can sign in to the VMs.
- Assign administrators who need local administrative privileges the **Virtual Machine Administrator Login** role.
-To grant users access to Azure AD-joined VMs, you must [configure role assignments for the VM](../active-directory/devices/howto-vm-sign-in-azure-ad-windows.md#configure-role-assignments-for-the-vm). You can assign the **Virtual Machine User Login** or **Virtual Machine Administrator Login** role either on the VMs, the resource group containing the VMs, or the subscription. We recommend assigning the Virtual Machine User Login role to the same user group you used for the app group at the resource group level to make it apply to all the VMs in the host pool.
+To grant users access to Azure AD-joined VMs, you must [configure role assignments for the VM](../active-directory/devices/howto-vm-sign-in-azure-ad-windows.md#configure-role-assignments-for-the-vm). You can assign the **Virtual Machine User Login** or **Virtual Machine Administrator Login** role either on the VMs, the resource group containing the VMs, or the subscription. We recommend assigning the Virtual Machine User Login role to the same user group you used for the application group at the resource group level to make it apply to all the VMs in the host pool.
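As a sketch, assigning that role at the resource group scope with PowerShell might look like the following; the group object ID and resource group name are placeholders:

```powershell
# Grant the user group sign-in rights on every VM in the host pool's resource group.
New-AzRoleAssignment -ObjectId "<user-group-object-id>" `
    -RoleDefinitionName "Virtual Machine User Login" `
    -ResourceGroupName "<host-pool-resource-group>"
```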
## Access Azure AD-joined VMs
virtual-desktop Cli Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/cli-powershell.md
Some PowerShell cmdlets require you to provide the object ID of Azure Virtual De
Now that you know how to use Azure CLI and Azure PowerShell with Azure Virtual Desktop, here are some articles that use them:

- [Create an Azure Virtual Desktop host pool with PowerShell or the Azure CLI](create-host-pools-powershell.md)
-- [Manage app groups using PowerShell or the Azure CLI](manage-app-groups-powershell.md)
+- [Manage application groups using PowerShell or the Azure CLI](manage-app-groups-powershell.md)
virtual-desktop Configure Host Pool Personal Desktop Assignment Type https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/configure-host-pool-personal-desktop-assignment-type.md
To directly assign a user to a session host in the Azure portal:
4. At the Azure Virtual Desktop page, go to the menu on the left side of the window and select **Host pools**.
5. Select the host pool you want to assign users to.
6. Next, go to the menu on the left side of the window and select **Application groups**.
-7. Select the name of the app group you want to assign users to, then select **Assignments** in the menu on the left side of the window.
-8. Select **+ Add**, then select the users or user groups you want to assign to this app group.
+7. Select the name of the application group you want to assign users to, then select **Assignments** in the menu on the left side of the window.
+8. Select **+ Add**, then select the users or user groups you want to assign to this application group.
9. Select **Assign VM** in the Information bar to assign a session host to a user.
10. Select the session host you want to assign to the user, then select **Assign**. You can also select **Assignment** > **Assign user**.
11. Select the user you want to assign the session host to from the list of available users.
virtual-desktop Customize Feed For Virtual Desktop Users https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/customize-feed-for-virtual-desktop-users.md
This article assumes you've already downloaded and installed the Azure Virtual D
You can change the display name for a remote desktop for your users by setting its session host friendly name. By default, the session host friendly name is empty, so users only see the app name. You can set the session host friendly name using REST API.

>[!NOTE]
->The following instructions only apply to personal desktops, not pooled desktops. Also, personal host pools only allow and support desktop app groups.
+>The following instructions only apply to personal desktops, not pooled desktops. Also, personal host pools only allow and support desktop application groups.
To add or change a session host's friendly name, use the [Session Host - Update REST API](/rest/api/desktopvirtualization/session-hosts/update?tabs=HTTP) and update the *properties.friendlyName* parameter with a REST API request.
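A hedged sketch of that request using `Invoke-AzRestMethod` follows; the path segments are placeholders, and the API version shown is an assumption you should check against the current REST reference:

```powershell
# PATCH the session host resource to set properties.friendlyName.
$path = "/subscriptions/<subscription-id>/resourceGroups/<resource-group>/providers/Microsoft.DesktopVirtualization/hostPools/<host-pool>/sessionHosts/<session-host-name>?api-version=2022-02-10-preview"

Invoke-AzRestMethod -Method PATCH -Path $path -Payload (@{
    properties = @{ friendlyName = "Sales team desktop" }
} | ConvertTo-Json)
```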
To add or change a session host's friendly name, use the [Session Host - Update
You can change the display name for a published RemoteApp by setting the friendly name. By default, the friendly name is the same as the name of the RemoteApp program.
-To retrieve a list of published RemoteApps for an app group, run the following PowerShell cmdlet:
+To retrieve a list of published RemoteApps for an application group, run the following PowerShell cmdlet:
```powershell
Get-AzWvdApplication -ResourceGroupName <resourcegroupname> -ApplicationGroupName <appgroupname>
FriendlyName : WordUpdate
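To then set a new display name, a sketch along these lines should work. It assumes the `Update-AzWvdApplication` cmdlet and its `-GroupName` parameter in your installed Az.DesktopVirtualization module; verify the parameter name for the application group against the module's help before running it:

```powershell
# Set a friendlier display name for the published RemoteApp.
Update-AzWvdApplication -ResourceGroupName <resourcegroupname> `
    -GroupName <appgroupname> `
    -Name <applicationname> `
    -FriendlyName "Word (updated)"
```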
## Customize the display name for a Remote Desktop
-You can change the display name for a published remote desktop by setting a friendly name. If you manually created a host pool and desktop app group through PowerShell, the default friendly name is "Session Desktop." If you created a host pool and desktop app group through the GitHub Azure Resource Manager template or the Azure Marketplace offering, the default friendly name is the same as the host pool name.
+You can change the display name for a published remote desktop by setting a friendly name. If you manually created a host pool and desktop application group through PowerShell, the default friendly name is "Session Desktop." If you created a host pool and desktop application group through the GitHub Azure Resource Manager template or the Azure Marketplace offering, the default friendly name is the same as the host pool name.
To retrieve the remote desktop resource, run the following PowerShell cmdlet:
You can change the display name for a published remote desktop by setting a frie
3. Under Services, select **Azure Virtual Desktop**.
-4. On the Azure Virtual Desktop page, select **Application groups** on the left side of the screen, then select the name of the app group you want to edit. (For example, if you want to edit the display name of the desktop app group, select the app group named **Desktop**.)
+4. On the Azure Virtual Desktop page, select **Application groups** on the left side of the screen, then select the name of the application group you want to edit. (For example, if you want to edit the display name of the desktop application group, select the application group named **Desktop**.)
5. Select **Applications** in the menu on the left side of the screen.
virtual-desktop Customize Rdp Properties https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/customize-rdp-properties.md
RDP files have the following properties by default:
|EnableCredssp|Enabled|

>[!NOTE]
->- Multi-monitor mode is only enabled for Desktop app groups and will be ignored for RemoteApp app groups.
+>- Multi-monitor mode is only enabled for Desktop application groups and will be ignored for RemoteApp application groups.
>- All default RDP file properties are exposed in the Azure Portal.
>- A null CustomRdpProperty field will apply all default RDP properties to your host pool. An empty CustomRdpProperty field won't apply any default RDP properties to your host pool. A PowerShell sketch for setting this field follows the note.
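A minimal sketch of setting the field with PowerShell; the host pool name and the property string are placeholders, and `Update-AzWvdHostPool` with its `-CustomRdpProperty` parameter is assumed from the Az.DesktopVirtualization module:

```powershell
# Enable audio capture and camera redirection for the host pool's RDP connections.
Update-AzWvdHostPool -ResourceGroupName <resourcegroupname> -Name <hostpoolname> `
    -CustomRdpProperty "audiocapturemode:i:1;camerastoredirect:s:*;"
```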
virtual-desktop Delegated Access Virtual Desktop https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/delegated-access-virtual-desktop.md
Azure Virtual Desktop delegated access supports the following values for each el
* Custom roles
* Scope
 * Host pools
- * App groups
+ * Application groups
* Workspaces

## PowerShell cmdlets for role assignments

Before you start, make sure to follow the instructions in [Set up the PowerShell module](powershell-module.md) to set up the Azure Virtual Desktop PowerShell module if you haven't already.
-Azure Virtual Desktop uses Azure role-based access control (Azure RBAC) while publishing app groups to users or user groups. The Desktop Virtualization User role is assigned to the user or user group and the scope is the app group. This role gives the user special data access on the app group.
+Azure Virtual Desktop uses Azure role-based access control (Azure RBAC) while publishing application groups to users or user groups. The Desktop Virtualization User role is assigned to the user or user group and the scope is the application group. This role gives the user special data access on the application group.
-Run the following cmdlet to add Azure Active Directory users to an app group:
+Run the following cmdlet to add Azure Active Directory users to an application group:
```powershell
New-AzRoleAssignment -SignInName <userupn> -RoleDefinitionName "Desktop Virtualization User" -ResourceName <appgroupname> -ResourceGroupName <resourcegroupname> -ResourceType 'Microsoft.DesktopVirtualization/applicationGroups'
```
-Run the following cmdlet to add Azure Active Directory user group to an app group:
+Run the following cmdlet to add an Azure Active Directory user group to an application group:
```powershell
New-AzRoleAssignment -ObjectId <usergroupobjectid> -RoleDefinitionName "Desktop Virtualization User" -ResourceName <appgroupname> -ResourceGroupName <resourcegroupname> -ResourceType 'Microsoft.DesktopVirtualization/applicationGroups'
virtual-desktop Delete Host Pool https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/delete-host-pool.md
# Delete a host pool
-All host pools created in Azure Virtual Desktop are attached to session hosts and app groups. To delete a host pool, you need to delete its associated app groups and session hosts. Deleting an app group is fairly simple, but deleting a session host is more complicated. When you delete a session host, you need to make sure it doesn't have any active user sessions. All user sessions on the session host should be logged off to prevent users from losing data.
+All host pools created in Azure Virtual Desktop are attached to session hosts and application groups. To delete a host pool, you need to delete its associated application groups and session hosts. Deleting an application group is fairly simple, but deleting a session host is more complicated. When you delete a session host, you need to make sure it doesn't have any active user sessions. All user sessions on the session host should be logged off to prevent users from losing data.
### [Portal](#tab/azure-portal)
To delete a host pool in the Azure portal:
5. Select all application groups in the host pool you're going to delete, then select **Remove**.
-6. Once you've removed the app groups, go to the menu on the left side of the page and select **Overview**.
+6. Once you've removed the application groups, go to the menu on the left side of the page and select **Overview**.
7. Select **Remove**.
To delete a host pool in the Azure portal:
### [Azure PowerShell](#tab/azure-powershell)
-To delete a host pool using PowerShell, you first need to delete all app groups in the host pool. To delete all app groups, run the following PowerShell cmdlet:
+To delete a host pool using PowerShell, you first need to delete all application groups in the host pool. To delete all application groups, run the following PowerShell cmdlet:
```powershell
Remove-AzWvdApplicationGroup -Name <appgroupname> -ResourceGroupName <resourcegroupname>
This cmdlet removes all existing user sessions on the host pool's session host.
### [Azure CLI](#tab/azure-cli)
-To delete a host pool using the Azure CLI, you first need to delete all app groups in the host pool.
+To delete a host pool using the Azure CLI, you first need to delete all application groups in the host pool.
-To delete all app groups, use the [az desktopvirtualization applicationgroup delete](/cli/azure/desktopvirtualization/applicationgroup#az-desktopvirtualization-applicationgroup-delete) command:
+To delete all application groups, use the [az desktopvirtualization applicationgroup delete](/cli/azure/desktopvirtualization/applicationgroup#az-desktopvirtualization-applicationgroup-delete) command:
```azurecli
az desktopvirtualization applicationgroup delete --name "MyApplicationGroup" --resource-group "MyResourceGroup"
virtual-desktop Diagnostics Log Analytics https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/diagnostics-log-analytics.md
To set up Log Analytics for a new object:
1. Sign in to the Azure portal and go to **Azure Virtual Desktop**.
-2. Navigate to the object (such as a host pool, app group, or workspace) that you want to capture logs and events for.
+2. Navigate to the object (such as a host pool, application group, or workspace) that you want to capture logs and events for.
3. Select **Diagnostic settings** in the menu on the left side of the screen.
To set up Log Analytics for a new object:
The options shown in the Diagnostic Settings page will vary depending on what kind of object you're editing.
- For example, when you're enabling diagnostics for an app group, you'll see options to configure checkpoints, errors, and management. For workspaces, these categories configure a feed to track when users subscribe to the list of apps. To learn more about diagnostic settings see [Create diagnostic setting to collect resource logs and metrics in Azure](../azure-monitor/essentials/diagnostic-settings.md).
+ For example, when you're enabling diagnostics for an application group, you'll see options to configure checkpoints, errors, and management. For workspaces, these categories configure a feed to track when users subscribe to the list of apps. To learn more about diagnostic settings see [Create diagnostic setting to collect resource logs and metrics in Azure](../azure-monitor/essentials/diagnostic-settings.md).
>[!IMPORTANT]
>Remember to enable diagnostics for each Azure Resource Manager object that you want to monitor. Data will be available for activities after diagnostics has been enabled. It might take a few hours after first set-up.
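If you prefer scripting this rather than using the portal, a rough sketch with the older `Set-AzDiagnosticSetting` cmdlet is shown below. The resource ID, workspace ID, and category names are placeholders and assumptions to verify for your objects; newer Az.Monitor versions expose `New-AzDiagnosticSetting` instead:

```powershell
# Send an application group's Checkpoint, Error, and Management logs to a Log Analytics workspace.
Set-AzDiagnosticSetting -Name "send-to-law" `
    -ResourceId "/subscriptions/<subscription-id>/resourceGroups/<resource-group>/providers/Microsoft.DesktopVirtualization/applicationGroups/<app-group-name>" `
    -WorkspaceId "/subscriptions/<subscription-id>/resourceGroups/<resource-group>/providers/Microsoft.OperationalInsights/workspaces/<workspace-name>" `
    -Category "Checkpoint","Error","Management" `
    -Enabled $true
```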
virtual-desktop Disaster Recovery Concepts https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/disaster-recovery-concepts.md
To prevent system outages or downtime, every system and component in your Azure
## Azure Virtual Desktop infrastructure
-In order to figure out which areas to make fault-tolerant, we first need to know who's responsible for maintaining each area. You can divide responsibility in the Azure Virtual Desktop service into two areas: Microsoft-managed and customer-managed. Metadata like the host pools, app groups, and workspaces is controlled by Microsoft. The metadata is always available and doesn't require extra setup by the customer to replicate host pool data or configurations. We've designed the gateway infrastructure that connects people to their session hosts to be a global, highly resilient service managed by Microsoft. Meanwhile, customer-managed areas involve the virtual machines (VMs) used in Azure Virtual Desktop and the settings and configurations unique to the customer's deployment. The following table gives a clearer idea of which areas are managed by which party.
+In order to figure out which areas to make fault-tolerant, we first need to know who's responsible for maintaining each area. You can divide responsibility in the Azure Virtual Desktop service into two areas: Microsoft-managed and customer-managed. Metadata like the host pools, application groups, and workspaces is controlled by Microsoft. The metadata is always available and doesn't require extra setup by the customer to replicate host pool data or configurations. We've designed the gateway infrastructure that connects people to their session hosts to be a global, highly resilient service managed by Microsoft. Meanwhile, customer-managed areas involve the virtual machines (VMs) used in Azure Virtual Desktop and the settings and configurations unique to the customer's deployment. The following table gives a clearer idea of which areas are managed by which party.
| Managed by Microsoft | Managed by customer |
|-|-|
virtual-desktop Disaster Recovery https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/disaster-recovery.md
Identifying which method works best for your organization is the first thing you
First, you'll need to replicate your VMs to the secondary location. Your options for doing so depend on how your VMs are configured:

-- You can configure replication for all your VMs in both pooled and personal host pools with Azure Site Recovery. For more information about how this process works, see [Replicate Azure VMs to another Azure region](../site-recovery/azure-to-azure-how-to-enable-replication.md). However, if you have pooled host pools that you built from the same image and don't have any personal user data stored locally, you can choose not to replicate them. Instead, you have the option to build the VMs ahead of time and keep them powered off. You can also choose to only provision new VMs in the secondary region while a disaster is happening. If you choose these methods, you'll only need to set up one host pool and its related app groups and workspaces.
-- You can create a new host pool in the failover region while keeping all resources in your failover location turned off. For this method, you'd need to set up new app groups and workspaces in the failover region. You can then use an Azure Site Recovery plan to turn on host pools.
-- You can create a host pool that's populated by VMs built in both the primary and failover regions while keeping the VMs in the failover region turned off. In this case, you only need to set up one host pool and its related app groups and workspaces. You can use an Azure Site Recovery plan to power on host pools with this method.
+- You can configure replication for all your VMs in both pooled and personal host pools with Azure Site Recovery. For more information about how this process works, see [Replicate Azure VMs to another Azure region](../site-recovery/azure-to-azure-how-to-enable-replication.md). However, if you have pooled host pools that you built from the same image and don't have any personal user data stored locally, you can choose not to replicate them. Instead, you have the option to build the VMs ahead of time and keep them powered off. You can also choose to only provision new VMs in the secondary region while a disaster is happening. If you choose these methods, you'll only need to set up one host pool and its related application groups and workspaces.
+- You can create a new host pool in the failover region while keeping all resources in your failover location turned off. For this method, you'd need to set up new application groups and workspaces in the failover region. You can then use an Azure Site Recovery plan to turn on host pools.
+- You can create a host pool that's populated by VMs built in both the primary and failover regions while keeping the VMs in the failover region turned off. In this case, you only need to set up one host pool and its related application groups and workspaces. You can use an Azure Site Recovery plan to power on host pools with this method.
We recommend you use [Azure Site Recovery](../site-recovery/site-recovery-overview.md) to manage replicating VMs to other Azure locations, as described in [Azure-to-Azure disaster recovery architecture](../site-recovery/azure-to-azure-architecture.md). We especially recommend using Azure Site Recovery for personal host pools because, true to their name, personal host pools tend to have something personal about them for their users. Azure Site Recovery supports both [server-based and client-based SKUs](../site-recovery/azure-to-azure-support-matrix.md#replicated-machine-operating-systems).
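As a rough sketch of the "keep failover VMs powered off" options above: during a disaster you would simply power on the pre-built session hosts in the secondary region, whether manually or from an Azure Site Recovery recovery plan. The resource group name and VM name prefix below are hypothetical placeholders.

```powershell
# Minimal sketch: power on pre-built session hosts in the failover region during a DR event.
# "rg-avd-failover" and the "avd-dr-*" prefix are hypothetical placeholders for your own names.
Connect-AzAccount

$failoverHosts = Get-AzVM -ResourceGroupName "rg-avd-failover" |
    Where-Object { $_.Name -like "avd-dr-*" }

foreach ($vm in $failoverHosts) {
    # -NoWait queues the start operations so all session hosts come up in parallel.
    Start-AzVM -ResourceGroupName $vm.ResourceGroupName -Name $vm.Name -NoWait
}
```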
virtual-desktop Fslogix Profile Container Configure Azure Files Active Directory https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/fslogix-profile-container-configure-azure-files-active-directory.md
You have now finished setting up Profile Container. If you are installing Pr
## Validate profile creation
-Once you've installed and configured Profile Container, you can test your deployment by signing in with a user account that's been assigned an app group or desktop on the host pool.
+Once you've installed and configured Profile Container, you can test your deployment by signing in with a user account that's been assigned an application group or desktop on the host pool.
If the user has signed in before, they'll have an existing local profile that they'll use during this session. Either delete the local profile first, or create a new user account to use for tests.
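One quick way to confirm that Profile Container is working after the test sign-in is to check that a profile VHD or VHDX appeared on the share. The sketch below assumes your profiles live on an Azure Files share; the UNC path is a placeholder.

```powershell
# Sketch: list profile containers created on the share after a test sign-in.
# Replace the UNC path with your own Azure Files share.
Get-ChildItem -Path "\\storageaccount.file.core.windows.net\profiles" -Recurse -Include *.vhd, *.vhdx |
    Select-Object FullName, Length, LastWriteTime
```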
virtual-desktop Getting Started Feature https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/getting-started-feature.md
# Deploy Azure Virtual Desktop with the getting started feature
-You can quickly deploy Azure Virtual Desktop with the *getting started* feature in the Azure portal. This can be used in smaller scenarios with a few users and apps, or you can use it to evaluate Azure Virtual Desktop in larger enterprise scenarios. It works with existing Active Directory Domain Services (AD DS) or Azure Active Directory Domain Services (Azure AD DS) deployments, or it can deploy Azure AD DS for you. Once you've finished, a user will be able to sign in to a full virtual desktop session, consisting of one host pool (with one or more session hosts), one app group, and one user. To learn about the terminology used in Azure Virtual Desktop, see [Azure Virtual Desktop terminology](environment-setup.md).
+You can quickly deploy Azure Virtual Desktop with the *getting started* feature in the Azure portal. This can be used in smaller scenarios with a few users and apps, or you can use it to evaluate Azure Virtual Desktop in larger enterprise scenarios. It works with existing Active Directory Domain Services (AD DS) or Azure Active Directory Domain Services (Azure AD DS) deployments, or it can deploy Azure AD DS for you. Once you've finished, a user will be able to sign in to a full virtual desktop session, consisting of one host pool (with one or more session hosts), one application group, and one user. To learn about the terminology used in Azure Virtual Desktop, see [Azure Virtual Desktop terminology](environment-setup.md).
Joining session hosts to Azure Active Directory with the getting started feature is not supported. If you want to join session hosts to Azure Active Directory, follow the [tutorial to create a host pool](create-host-pools-azure-marketplace.md).
To delete the resource groups:
## Next steps
-If you want to publish apps as well as the full virtual desktop, see the tutorial to [Manage app groups with the Azure portal](manage-app-groups.md).
+If you want to publish apps as well as the full virtual desktop, see the tutorial to [Manage application groups with the Azure portal](manage-app-groups.md).
If you'd like to learn how to deploy Azure Virtual Desktop in a more in-depth way, with less permission required, or programmatically, check out our series of tutorials, starting with [Create a host pool with the Azure portal](create-host-pools-azure-marketplace.md).
virtual-desktop Manage App Groups Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/manage-app-groups-powershell.md
Title: Manage app groups for Azure Virtual Desktop - Azure
-description: How to manage Azure Virtual Desktop app groups with PowerShell or the Azure CLI.
+ Title: Manage application groups for Azure Virtual Desktop - Azure
+description: How to manage Azure Virtual Desktop application groups with PowerShell or the Azure CLI.
Last updated 07/23/2021
-# Manage app groups using PowerShell or the Azure CLI
+# Manage application groups using PowerShell or the Azure CLI
>[!IMPORTANT] >This content applies to Azure Virtual Desktop with Azure Resource Manager Azure Virtual Desktop objects. If you're using Azure Virtual Desktop (classic) without Azure Resource Manager objects, see [this article](./virtual-desktop-fall-2019/manage-app-groups-2019.md).
-The default app group created for a new Azure Virtual Desktop host pool also publishes the full desktop. In addition, you can create one or more RemoteApp application groups for the host pool. Follow this tutorial to create a RemoteApp app group and publish individual **Start** menu apps.
+The default application group created for a new Azure Virtual Desktop host pool also publishes the full desktop. In addition, you can create one or more RemoteApp application groups for the host pool. Follow this tutorial to create a RemoteApp application group and publish individual **Start** menu apps.
In this tutorial, learn how to:
This article assumes you've already set up your environment for the Azure CLI, a
To create a RemoteApp group with PowerShell:
-1. Run the following PowerShell cmdlet to create a new empty RemoteApp app group.
+1. Run the following PowerShell cmdlet to create a new empty RemoteApp application group.
```powershell
New-AzWvdApplicationGroup -Name <appgroupname> -ResourceGroupName <resourcegroupname> -ApplicationGroupType "RemoteApp" -HostPoolArmPath '/subscriptions/SubscriptionId/resourcegroups/ResourceGroupName/providers/Microsoft.DesktopVirtualization/hostPools/HostPoolName' -Location <azureregion>
```
-2. (Optional) To verify that the app group was created, you can run the following cmdlet to see a list of all app groups for the host pool.
+2. (Optional) To verify that the application group was created, you can run the following cmdlet to see a list of all application groups for the host pool.
```powershell
Get-AzWvdApplicationGroup -Name <appgroupname> -ResourceGroupName <resourcegroupname>
```
To create a RemoteApp group with PowerShell:
Get-AzWvdApplication -GroupName <appgroupname> -ResourceGroupName <resourcegroupname> ```
-7. Repeat steps 1ΓÇô5 for each application that you want to publish for this app group.
-8. Run the following cmdlet to grant users access to the RemoteApp programs in the app group.
+7. Repeat steps 1–5 for each application that you want to publish for this application group.
+8. Run the following cmdlet to grant users access to the RemoteApp programs in the application group.
```powershell
New-AzRoleAssignment -SignInName <userupn> -RoleDefinitionName "Desktop Virtualization User" -ResourceName <appgroupname> -ResourceGroupName <resourcegroupname> -ResourceType 'Microsoft.DesktopVirtualization/applicationGroups'
```
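The cmdlet above grants access to one user by UPN. If you'd rather publish the application group to an Azure AD group, a sketch like the following should also work, since `New-AzRoleAssignment` accepts an object ID; the group display name is a hypothetical example.

```powershell
# Sketch: assign the "Desktop Virtualization User" role to an Azure AD group instead of a single user.
# "AVD Sales Users" is a hypothetical group name.
$group = Get-AzADGroup -DisplayName "AVD Sales Users"

New-AzRoleAssignment -ObjectId $group.Id `
    -RoleDefinitionName "Desktop Virtualization User" `
    -ResourceName <appgroupname> `
    -ResourceGroupName <resourcegroupname> `
    -ResourceType 'Microsoft.DesktopVirtualization/applicationGroups'
```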
To create a RemoteApp group with PowerShell:
To create a RemoteApp group with the Azure CLI:
-1. Use the [az desktopvirtualization applicationgroup create](/cli/azure/desktopvirtualization##az-desktopvirtualization-applicationgroup-create) command to create a new remote application group:
+1. Use the [az desktopvirtualization applicationgroup create](/cli/azure/desktopvirtualization##az-desktopvirtualization-applicationgroup-create) command to create a new RemoteApp application group:
```azurecli az desktopvirtualization applicationgroup create --name "MyApplicationGroup" \
To create a RemoteApp group with the Azure CLI:
--description "Description of this application group" ```
-2. (Optional) To verify that the app group was created, you can run the following command to see a list of all app groups for the host pool.
+2. (Optional) To verify that the application group was created, you can run the following command to see a list of all application groups for the host pool.
```azurecli az desktopvirtualization applicationgroup list \
virtual-desktop Manage App Groups https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/manage-app-groups.md
Title: Manage app groups for Azure Virtual Desktop portal - Azure
-description: How to manage Azure Virtual Desktop app groups with the Azure portal.
+ Title: Manage application groups for Azure Virtual Desktop portal - Azure
+description: How to manage Azure Virtual Desktop application groups with the Azure portal.
Last updated 01/31/2022
-# Manage app groups with the Azure portal
+# Manage application groups with the Azure portal
>[!IMPORTANT] >This content applies to Azure Virtual Desktop with Azure Resource Manager Azure Virtual Desktop objects. If you're using Azure Virtual Desktop (classic) without Azure Resource Manager objects, see [this article](./virtual-desktop-fall-2019/manage-app-groups-2019.md).
-The default app group created for a new Azure Virtual Desktop host pool also publishes the full desktop. In addition, you can create one or more RemoteApp application groups for the host pool. Follow this tutorial to create a RemoteApp app group and publish individual Start menu apps.
+The default application group created for a new Azure Virtual Desktop host pool also publishes the full desktop. In addition, you can create one or more RemoteApp application groups for the host pool. Follow this tutorial to create a RemoteApp application group and publish individual Start menu apps.
>[!NOTE] >You can dynamically attach MSIX apps to user sessions or add your app packages to a custom virtual machine (VM) image to publish your organization's apps. Learn more at [How to host custom apps with Azure Virtual Desktop](./remote-app-streaming/custom-apps.md).
If you've already created a host pool and session host VMs using the Azure porta
- Select **Host pools** in the menu on the left side of the screen, select the name of the host pool, select **Application groups** from the menu on the left side, then select **+ Add**. In this case, the host pool will already be selected on the Basics tab.
-4. On the **Basics** tab, select the **Subscription** and **Resource group** you want to create the app group for. You can also choose to create a new resource group instead of selecting an existing one.
+4. On the **Basics** tab, select the **Subscription** and **Resource group** you want to create the application group for. You can also choose to create a new resource group instead of selecting an existing one.
5. Select the **Host pool** that will be associated with the application group from the drop-down menu. >[!NOTE]
- >You must select the host pool associated with the application group. App groups have apps or desktops that are served from a session host and session hosts are part of host pools. The app group needs to be associated with a host pool during creation.
+ >You must select the host pool associated with the application group. Application groups have apps or desktops that are served from a session host and session hosts are part of host pools. The application group needs to be associated with a host pool during creation.
> [!div class="mx-imgBorder"] > ![A screenshot of the Basics tab in the Azure portal.](media/basics-tab.png)
If you've already created a host pool and session host VMs using the Azure porta
7. Select **Next: Assignments >** tab.
-8. To assign individual users or user groups to the app group, select **+Add Azure AD users or user groups**.
+8. To assign individual users or user groups to the application group, select **+Add Azure AD users or user groups**.
9. Select the users you want to have access to the apps. You can select single or multiple users and user groups.
If you've already created a host pool and session host VMs using the Azure porta
15. Next, select **Next: Workspace >**.
-16. If you want to register the app group to a workspace, select **Yes** for **Register application group**. If you'd rather register the app group at a later time, select **No**.
+16. If you want to register the application group to a workspace, select **Yes** for **Register application group**. If you'd rather register the application group at a later time, select **No**.
-17. If you select **Yes**, you can select an existing workspace to register your app group to.
+17. If you select **Yes**, you can select an existing workspace to register your application group to.
>[!NOTE]
- >You can only register the app group to workspaces created in the same location as the host pool. Also. if you've previously registered another app group from the same host pool as your new app group to a workspace, it will be selected and you can't edit it. All app groups from a host pool must be registered to the same workspace.
+ >You can only register the application group to workspaces created in the same location as the host pool. Also, if you've previously registered another application group from the same host pool as your new application group to a workspace, it will be selected and you can't edit it. All application groups from a host pool must be registered to the same workspace.
> [!div class="mx-imgBorder"] > ![A screenshot of the register application group page for an already existing workspace. The host pool is preselected.](media/register-existing.png)
If you've already created a host pool and session host VMs using the Azure porta
19. When you're done, select **Review + create**.
-20. Wait a bit for the validation process to complete. When it's done, select **Create** to deploy your app group.
+20. Wait a bit for the validation process to complete. When it's done, select **Create** to deploy your application group.
The deployment process will do the following things for you:
-- Create the RemoteApp app group.
-- Add your selected apps to the app group.
-- Publish the app group published to users and user groups you selected.
-- Register the app group, if you chose to do so.
+- Create the RemoteApp application group.
+- Add your selected apps to the application group.
+- Publish the application group to the users and user groups you selected.
+- Register the application group, if you chose to do so.
- Create a link to an Azure Resource Manager template based on your configuration that you can download and save for later. >[!IMPORTANT]
->You can only create 500 application groups for each Azure Active Directory tenant. We added this limit because of service limitations for retrieving feeds for our users. This limit doesn't apply to app groups created in Azure Virtual Desktop (classic).
+>You can only create 500 application groups for each Azure Active Directory tenant. We added this limit because of service limitations for retrieving feeds for our users. This limit doesn't apply to application groups created in Azure Virtual Desktop (classic).
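The 500 limit applies per Azure AD tenant, so a per-subscription count is only indicative, but a quick sketch like the following (assuming the Az.DesktopVirtualization module is installed and you're signed in) gives a rough sense of how close you are:

```powershell
# Sketch: count the application groups visible in the current subscription.
# The 500 limit is per Azure AD tenant, so repeat across subscriptions for a full picture.
$appGroups = @(Get-AzWvdApplicationGroup)
Write-Output ("Application groups in this subscription: {0}" -f $appGroups.Count)
```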
## Edit or remove an app
-To edit or remove an app from an app group:
+To edit or remove an app from an application group:
1. Sign in to the [Azure portal](https://portal.azure.com/).
To edit or remove an app from an app group:
3. You can either add an application group directly or from an existing host pool by choosing one of the following options:
- - To add a new application group directly, select **Application groups** in the menu on the left side of the page, then select the app group you want to edit.
- - To edit an app group in an existing host pool, select **Host pools** in the menu on the left side of the screen, select the name of the host pool, then select **Application groups** in the menu that appears on the left side of the screen, and then select the app group you want to edit.
+ - To add a new application group directly, select **Application groups** in the menu on the left side of the page, then select the application group you want to edit.
+ - To edit an application group in an existing host pool, select **Host pools** in the menu on the left side of the screen, select the name of the host pool, then select **Application groups** in the menu that appears on the left side of the screen, and then select the application group you want to edit.
4. Select **Applications** in the menu on the left side of the page.
To edit or remove an app from an app group:
## Next steps
-In this tutorial, you learned how to create an app group, populate it with RemoteApp programs, and assign users to the app group. To learn how to create a validation host pool, see the following tutorial. You can use a validation host pool to monitor service updates before rolling them out to your production environment.
+In this tutorial, you learned how to create an application group, populate it with RemoteApp programs, and assign users to the application group. To learn how to create a validation host pool, see the following tutorial. You can use a validation host pool to monitor service updates before rolling them out to your production environment.
> [!div class="nextstepaction"] > [Create a host pool to validate service updates](./create-validation-host-pool.md)
virtual-desktop Manual Migration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/manual-migration.md
To migrate manually from Azure Virtual Desktop (classic) to Azure Virtual Deskto
1. Follow the instructions in [Create a host pool with the Azure portal](create-host-pools-azure-marketplace.md) to create all high-level objects with the Azure portal.
2. If you want to bring over the virtual machines you're already using, follow the instructions in [Register the virtual machines to the Azure Virtual Desktop host pool](create-host-pools-powershell.md#register-the-virtual-machines-to-the-azure-virtual-desktop-host-pool) to manually register them to the new host pool you created in step 1.
-3. Create new RemoteApp app groups.
-4. Publish users or user groups to the new desktop and RemoteApp app groups.
+3. Create new RemoteApp application groups.
+4. Publish users or user groups to the new desktop and RemoteApp application groups.
5. Update your Conditional Access policy to allow the new objects by following the instructions in [Set up multi-factor authentication](set-up-mfa.md).
-To prevent downtime, you should first register your existing session hosts to the Azure Resource Manager-integrated host pools in small groups at a time. After that, slowly bring your users over to the new Azure Resource Manager-integrated app groups.
+To prevent downtime, you should first register your existing session hosts to the Azure Resource Manager-integrated host pools in small groups at a time. After that, slowly bring your users over to the new Azure Resource Manager-integrated application groups.
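For the registration step, you typically generate a registration token for the new Azure Resource Manager-integrated host pool and use it when reinstalling or reconfiguring the agent on each batch of session hosts. A minimal sketch with the Az.DesktopVirtualization module:

```powershell
# Sketch: create a registration token for the new ARM-integrated host pool,
# then use it when registering each batch of existing session hosts.
$token = New-AzWvdRegistrationInfo -ResourceGroupName <resourcegroupname> `
    -HostPoolName <hostpoolname> `
    -ExpirationTime (Get-Date).AddHours(8)

$token.Token | Out-File -FilePath .\registration-token.txt
```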
## Next steps
virtual-desktop Move Resources https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/move-resources.md
In this article, we'll tell you how to move Azure Virtual Desktop resources betw
When you move Azure Virtual Desktop resources between regions, these are some things you should keep in mind:
-- When exporting resources, you must move them as a set. All resources associated with a specific host pool have to stay together. A host pool and its associated app groups need to be in the same region.
+- When exporting resources, you must move them as a set. All resources associated with a specific host pool have to stay together. A host pool and its associated application groups need to be in the same region.
-- Workspaces and their associated app groups also need to be in the same region.
+- Workspaces and their associated application groups also need to be in the same region.
- Scaling plans and the host pools they are assigned to also need to be in the same region.
virtual-desktop Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/overview.md
With Azure Virtual Desktop, you can set up a scalable and flexible environment:
You can deploy and manage virtual desktops:
-- Use the Azure portal, Azure CLI, PowerShell and REST API to configure the host pools, create app groups, assign users, and publish resources.
-- Publish full desktop or individual remote apps from a single host pool, create individual app groups for different sets of users, or even assign users to multiple app groups to reduce the number of images.
+- Use the Azure portal, Azure CLI, PowerShell and REST API to configure the host pools, create application groups, assign users, and publish resources.
+- Publish full desktop or individual remote apps from a single host pool, create individual application groups for different sets of users, or even assign users to multiple application groups to reduce the number of images.
- As you manage your environment, use built-in delegated access to assign roles and collect diagnostics to understand various configuration or user errors.
- Use the new Diagnostics service to troubleshoot errors.
- Only manage the image and virtual machines, not the infrastructure. You don't need to personally manage the Remote Desktop roles like you do with Remote Desktop Services, just the virtual machines in your Azure subscription.
virtual-desktop Private Link Setup https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/private-link-setup.md
This article will show you how to set up Private Link for Azure Virtual Desktop
In order to use Private Link in your Azure Virtual Desktop deployment, you'll need the following things:
- An Azure account with an active subscription.
-- An Azure Virtual Desktop deployment with service objects, such as host pools, app groups, and [workspaces](environment-setup.md#workspaces).
+- An Azure Virtual Desktop deployment with service objects, such as host pools, application groups, and [workspaces](environment-setup.md#workspaces).
- The [required permissions to use Private Link](../private-link/rbac-permissions.md). >[!IMPORTANT]
virtual-desktop Msix App Attach https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/remote-app-streaming/msix-app-attach.md
You'll need the following things to use MSIX app attach in Azure Virtual Desktop
- An MSIX share, which is the network location where you store MSIX images
- At least one healthy and active session host in the host pool you'll use
- If your MSIX packaged application has a private certificate, that certificate must be available on all session hosts in the host pool
-- Azure Virtual Desktop configuration for MSIX app attach (user assignment, association of MSIX application with app group, adding MSIX image to host pool)
+- Azure Virtual Desktop configuration for MSIX app attach (user assignment, association of MSIX application with application group, adding MSIX image to host pool)
## Create an MSIX package from an existing installer
virtual-desktop Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/remote-app-streaming/overview.md
You can set up your deployment manually by following these tutorials:
1. [Create a host pool with the Azure portal](../create-host-pools-azure-marketplace.md?toc=/azure/virtual-desktop/remote-app-streaming/toc.json&bc=/azure/virtual-desktop/breadcrumb/toc.json)
-2. [Manage app groups](../manage-app-groups.md?toc=/azure/virtual-desktop/remote-app-streaming/toc.json&bc=/azure/virtual-desktop/breadcrumb/toc.json)
+2. [Manage application groups](../manage-app-groups.md?toc=/azure/virtual-desktop/remote-app-streaming/toc.json&bc=/azure/virtual-desktop/breadcrumb/toc.json)
3. [Create a host pool to validate service updates](../create-validation-host-pool.md?toc=/azure/virtual-desktop/remote-app-streaming/toc.json&bc=/azure/virtual-desktop/breadcrumb/toc.json)
virtual-desktop Sandbox https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/sandbox.md
To publish Windows Sandbox to your host pool:
1. Select **Application groups**, then select the name of the application group in the host pool you want to publish Windows Sandbox to.
-1. Once you're in the application group, select the **Applications** tab. The Applications grid will display all existing apps within the app group.
+4. Once you're in the application group, select the **Applications** tab. The Applications grid will display all existing apps within the application group.
1. Select **+ Add** to open the **Add application** tab.
virtual-desktop Terminology https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/terminology.md
Title: Azure Virtual Desktop terminology - Azure
-description: Learn about the basic elements of Azure Virtual Desktop, like host pools, app groups, and workspaces.
+description: Learn about the basic elements of Azure Virtual Desktop, like host pools, application groups, and workspaces.
Last updated 02/03/2023
Azure Virtual Desktop is a service that gives users easy and secure access to th
## Host pools
-A host pool is a collection of Azure virtual machines that register to Azure Virtual Desktop as session hosts when you run the Azure Virtual Desktop agent. All session host virtual machines in a host pool should be sourced from the same image for a consistent user experience. You control the resources published to users through app groups.
+A host pool is a collection of Azure virtual machines that register to Azure Virtual Desktop as session hosts when you run the Azure Virtual Desktop agent. All session host virtual machines in a host pool should be sourced from the same image for a consistent user experience. You control the resources published to users through application groups.
A host pool can be one of two types:
The following table goes into more detail about the differences between each typ
|Windows Updates|Updated with Windows Updates, [System Center Configuration Manager (SCCM)](configure-automatic-updates.md), or other software distribution configuration tools.|Updated by redeploying session hosts from updated images instead of traditional updates.|
|User data| Each user only ever uses one session host, so they can store their user profile data on the operating system (OS) disk of the VM. | Users can connect to different session hosts every time they connect, so they should store their user profile data in [FSLogix](/fslogix/configure-profile-container-tutorial). |
-## App groups
+## Application groups
-An app group is a logical grouping of applications installed on session hosts in the host pool.
+An application group is a logical grouping of applications installed on session hosts in the host pool.
-An app group can be one of two types:
+An application group can be one of two types:
-- RemoteApp, where users access the RemoteApps you individually select and publish to the app group. Available with pooled host pools only.
+- RemoteApp, where users access the RemoteApps you individually select and publish to the application group. Available with pooled host pools only.
- Desktop, where users access the full desktop. Available with pooled or personal host pools.
-Pooled host pools have a preferred app group type that dictates whether users see RemoteApp or Desktop apps in their feed if both resources have been published to the same user. By default, Azure Virtual Desktop automatically creates a Desktop app group with the friendly name **Default Desktop** whenever you create a host pool and sets the host pool's preferred app group type to **Desktop**. You can remove the Desktop app group at any time. If you want your users to only see RemoteApps in their feed, you should set the **preferred application group type** value to **RemoteApp**. If you want your users to only see session desktops in their feed, you should set the **preferred application group type** value to **Desktop**. You can't create another Desktop app group in a host pool while a Desktop app group exists.
+Pooled host pools have a preferred application group type that dictates whether users see RemoteApp or Desktop apps in their feed if both resources have been published to the same user. By default, Azure Virtual Desktop automatically creates a Desktop application group with the friendly name **Default Desktop** whenever you create a host pool and sets the host pool's preferred application group type to **Desktop**. You can remove the Desktop application group at any time. If you want your users to only see RemoteApps in their feed, you should set the **preferred application group type** value to **RemoteApp**. If you want your users to only see session desktops in their feed, you should set the **preferred application group type** value to **Desktop**. You can't create another Desktop application group in a host pool while a Desktop application group exists.
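If you want to flip a host pool's feed behavior from PowerShell, a sketch like the following should do it, assuming `Update-AzWvdHostPool` in the Az.DesktopVirtualization module exposes the preferred app group type property as a parameter; `RailApplications` is the value that corresponds to RemoteApp, and `Desktop` to full desktops.

```powershell
# Sketch: make users of a pooled host pool see only RemoteApps in their feed.
# Assumes Update-AzWvdHostPool accepts -PreferredAppGroupType (RailApplications = RemoteApp, Desktop = full desktop).
Update-AzWvdHostPool -ResourceGroupName <resourcegroupname> `
    -Name <hostpoolname> `
    -PreferredAppGroupType RailApplications
```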
-To publish resources to users, you must assign them to app groups. When assigning users to app groups, consider the following things:
+To publish resources to users, you must assign them to application groups. When assigning users to application groups, consider the following things:
-- We don't support assigning both the RemoteApp and desktop app groups in a single host pool to the same user. Doing so will cause a single user to have two user sessions in a single host pool. Users aren't supposed to have two active user sessions at the same time, as this can cause the following things to happen:
+- We don't support assigning both the RemoteApp and desktop application groups in a single host pool to the same user. Doing so will cause a single user to have two user sessions in a single host pool. Users aren't supposed to have two active user sessions at the same time, as this can cause the following things to happen:
- The session hosts become overloaded
- Users get stuck when trying to login
- Connections won't work
- The screen turns black
- The application crashes
- Other negative effects on end-user experience and session performance
-- A user can be assigned to multiple app groups within the same host pool, and their feed will be an accumulation of both app groups.
-- Personal host pools only allow and support Desktop app groups.
+- A user can be assigned to multiple application groups within the same host pool, and their feed will be an accumulation of both application groups.
+- Personal host pools only allow and support Desktop application groups.
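To spot the conflicting Desktop-plus-RemoteApp assignment described above, one option is to list a user's role assignments and filter on the application group resource type; the UPN below is a placeholder.

```powershell
# Sketch: list the application groups a user is assigned to, to spot conflicting Desktop/RemoteApp assignments.
Get-AzRoleAssignment -SignInName "user@contoso.com" |
    Where-Object { $_.Scope -like "*Microsoft.DesktopVirtualization/applicationGroups*" } |
    Select-Object RoleDefinitionName, Scope
```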
>[!NOTE] >If your host pool's *preferred application group type* is set to **Undefined**, that means you haven't set the value yet. You must finish configuring your host pool by setting its *preferred application group type* before you start using it to prevent app incompatibility and session host overload issues.
A workspace is a logical grouping of application groups in Azure Virtual Desktop
## End users
-After you've assigned users to their app groups, they can connect to an Azure Virtual Desktop deployment with any of the Azure Virtual Desktop clients.
+After you've assigned users to their application groups, they can connect to an Azure Virtual Desktop deployment with any of the Azure Virtual Desktop clients.
## User sessions
virtual-desktop Troubleshoot Management Issues https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/troubleshoot-management-issues.md
The following table lists error messages that appear due to management-related i
|Failed to change session host drain mode |Couldn't change drain mode on the VM. Check the VM status. If the VM isn't available, you can't change drain mode.|
|Failed to disconnect user sessions |Couldn't disconnect the user from the VM. Check the VM status. If the VM isn't available, you can't disconnect the user session. If the VM is available, check the user session status to see if it's disconnected. |
|Failed to log off all user(s) within the session host |Could not sign users out of the VM. Check the VM status. If unavailable, users can't be signed out. Check user session status to see if they're already signed out. You can force sign out with PowerShell. |
-|Failed to unassign user from application group|Could not unpublish an app group for a user. Check to see if user is available on Azure AD. Check to see if the user is part of a user group that the app group is published to. |
+|Failed to unassign user from application group|Could not unpublish an application group for a user. Check to see if user is available on Azure AD. Check to see if the user is part of a user group that the application group is published to. |
|There was an error retrieving the available locations |Check location of VM used in the create host pool wizard. If image is not available in that location, add image in that location or choose a different VM location. |
-## Error: Can't add user assignments to an app group
+## Error: Can't add user assignments to an application group
-After assigning a user to an app group, the Azure portal displays a warning that says "Session Ending" or "Experiencing Authentication Issues - Extension Microsoft_Azure_WVD." The assignment page then doesn't load, and after that, pages stop loading throughout the Azure portal (for example, Azure Monitor, Log Analytics, Service Health, and so on).
+After assigning a user to an application group, the Azure portal displays a warning that says "Session Ending" or "Experiencing Authentication Issues - Extension Microsoft_Azure_WVD." The assignment page then doesn't load, and after that, pages stop loading throughout the Azure portal (for example, Azure Monitor, Log Analytics, Service Health, and so on).
This issue usually appears because there's a problem with the conditional access policy. The Azure portal is trying to obtain a token for Microsoft Graph, which is dependent on SharePoint Online. The customer has a conditional access policy called "Microsoft Office 365 Data Storage Terms of Use" that requires users to accept the terms of use to access data storage. However, they haven't signed in yet, so the Azure portal can't get the token.
virtual-desktop Troubleshoot Service Connection https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/troubleshoot-service-connection.md
A user can start Remote Desktop clients and is able to authenticate, however the
This error usually appears after a user moved their subscription from one Azure Active Directory tenant to another. As a result, the service loses track of their user assignments, since those are still tied to the old Azure Active Directory tenant.
-To resolve this, all you need to do is reassign the users to their app groups.
+To resolve this, all you need to do is reassign the users to their application groups.
This could also happen if a CSP Provider created the subscription and then transferred it to the customer. To resolve this, re-register the Resource Provider.
virtual-desktop Troubleshoot Set Up Issues https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/troubleshoot-set-up-issues.md
If your operation goes over the quota limit, you can do one of the following thi
- Open the link you see in the statusMessage field in a browser to submit a request to increase the quota for your Azure subscription for the specified VM SKU.
-### Error: Can't see user assignments in app groups.
+### Error: Can't see user assignments in application groups.
**Cause**: This error usually happens after you've moved the subscription from one Azure Active Directory tenant to another. If your old assignments are still tied to the previous Azure Active Directory tenant, the Azure portal will lose track of them.
-**Fix**: You'll need to reassign users to app groups.
+**Fix**: You'll need to reassign users to application groups.
### I don't see the Azure region I want to use when selecting the location for my service objects
virtual-desktop Troubleshoot Set Up Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/troubleshoot-set-up-overview.md
Use the following table to identify and resolve issues you may encounter when se
| Managing Azure Virtual Desktop configuration tied to host pools and application groups (app groups) | See [Azure Virtual Desktop PowerShell](troubleshoot-powershell.md), or [open an Azure support request](https://azure.microsoft.com/support/create-ticket/), select **Azure Virtual Desktop** for the service, then select the appropriate problem type.|
| Deploying and managing FSLogix Profile Containers | See [Troubleshooting guide for FSLogix products](/fslogix/fslogix-trouble-shooting-ht/) and if that doesn't resolve the issue, [Open an Azure support request](https://azure.microsoft.com/support/create-ticket/), select **Azure Virtual Desktop** for the service, select **FSLogix** for the problem type, then select the appropriate problem subtype. |
| Remote desktop clients malfunction on start | See [Troubleshoot the Remote Desktop client](troubleshoot-client-windows.md) and if that doesn't resolve the issue, [Open an Azure support request](https://azure.microsoft.com/support/create-ticket/), select **Azure Virtual Desktop** for the service, then select **Remote Desktop clients** for the problem type. <br> <br> If it's a network issue, your users need to contact their network administrator. |
-| Connected but no feed | Troubleshoot using the [User connects but nothing is displayed (no feed)](troubleshoot-service-connection.md#user-connects-but-nothing-is-displayed-no-feed) section of [Azure Virtual Desktop service connections](troubleshoot-service-connection.md). <br> <br> If your users have been assigned to an app group, [open an Azure support request](https://azure.microsoft.com/support/create-ticket/), select **Azure Virtual Desktop** for the service, then select **Remote Desktop Clients** for the problem type. |
+| Connected but no feed | Troubleshoot using the [User connects but nothing is displayed (no feed)](troubleshoot-service-connection.md#user-connects-but-nothing-is-displayed-no-feed) section of [Azure Virtual Desktop service connections](troubleshoot-service-connection.md). <br> <br> If your users have been assigned to an application group, [open an Azure support request](https://azure.microsoft.com/support/create-ticket/), select **Azure Virtual Desktop** for the service, then select **Remote Desktop Clients** for the problem type. |
| Feed discovery problems due to the network | Your users need to contact their network administrator. |
| Connecting clients | See [Azure Virtual Desktop service connections](troubleshoot-service-connection.md) and if that doesn't solve your issue, see [Session host virtual machine configuration](troubleshoot-vm-configuration.md). |
| Responsiveness of remote applications or desktop | If issues are tied to a specific application or product, contact the team responsible for that product. |
virtual-desktop Configure Vm Gpu 2019 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/virtual-desktop-fall-2019/configure-vm-gpu-2019.md
Follow the instructions in this article to create a GPU optimized Azure virtual
Azure offers a number of [GPU optimized virtual machine sizes](../../virtual-machines/sizes-gpu.md). The right choice for your host pool depends on a number of factors, including your particular app workloads, desired quality of user experience, and cost. In general, larger and more capable GPUs offer a better user experience at a given user density.
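To compare the GPU-optimized sizes available in your region before picking one for the host pool, a quick sketch like this works (NV-series is used here only as an example filter; the region is a placeholder):

```powershell
# Sketch: list NV-series GPU VM sizes available in a region, with cores and memory for comparison.
Get-AzVMSize -Location "eastus" |
    Where-Object { $_.Name -like "Standard_NV*" } |
    Select-Object Name, NumberOfCores, MemoryInMB
```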
-## Create a host pool, provision your virtual machine, and configure an app group
+## Create a host pool, provision your virtual machine, and configure an application group
Create a new host pool using a VM of the size you selected. For instructions, see [Tutorial: Create a host pool with Azure Marketplace](../create-host-pools-azure-marketplace.md).
Azure Virtual Desktop supports GPU-accelerated rendering and encoding in the fol
* Windows 10 version 1511 or newer * Windows Server 2016 or newer
-You must also configure an app group, or use the default desktop app group (named "Desktop Application Group") that's automatically created when you create a new host pool. For instructions, see [Tutorial: Manage app groups for Azure Virtual Desktop](../manage-app-groups.md).
+You must also configure an application group, or use the default desktop application group (named "Desktop Application Group") that's automatically created when you create a new host pool. For instructions, see [Tutorial: Manage application groups for Azure Virtual Desktop](../manage-app-groups.md).
## Install supported graphics drivers in your virtual machine
virtual-desktop Create Host Pools Arm Template https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/virtual-desktop-fall-2019/create-host-pools-arm-template.md
>[!IMPORTANT] >This content applies to Azure Virtual Desktop (classic), which doesn't support Azure Resource Manager Azure Virtual Desktop objects.
-Host pools are a collection of one or more identical virtual machines within Azure Virtual Desktop tenant environments. Each host pool can contain an app group that users can interact with as they would on a physical desktop.
+Host pools are a collection of one or more identical virtual machines within Azure Virtual Desktop tenant environments. Each host pool can contain an application group that users can interact with as they would on a physical desktop.
Follow this section's instructions to create a host pool for an Azure Virtual Desktop tenant with an Azure Resource Manager template provided by Microsoft. This article will tell you how to create a host pool in Azure Virtual Desktop, create a resource group with VMs in an Azure subscription, join those VMs to the AD domain, and register the VMs with Azure Virtual Desktop.
virtual-desktop Create Host Pools Azure Marketplace 2019 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/virtual-desktop-fall-2019/create-host-pools-azure-marketplace-2019.md
In this tutorial, you'll learn how to create a host pool within an Azure Virtual Desktop tenant by using a Microsoft Azure Marketplace offering.
-Host pools are a collection of one or more identical virtual machines within Azure Virtual Desktop tenant environments. Each host pool can contain an app group that users can interact with as they would on a physical desktop.
+Host pools are a collection of one or more identical virtual machines within Azure Virtual Desktop tenant environments. Each host pool can contain an application group that users can interact with as they would on a physical desktop.
The tasks in this tutorial include:
Here are the current supported clients:
You've made a host pool and assigned users to access its desktop. You can populate your host pool with RemoteApp programs. To learn more about how to manage apps in Azure Virtual Desktop, see this tutorial: > [!div class="nextstepaction"]
-> [Manage app groups tutorial](manage-app-groups-2019.md)
+> [Manage application groups tutorial](manage-app-groups-2019.md)
virtual-desktop Create Host Pools Powershell 2019 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/virtual-desktop-fall-2019/create-host-pools-powershell-2019.md
>[!IMPORTANT] >This content applies to Azure Virtual Desktop (classic), which doesn't support Azure Resource Manager Azure Virtual Desktop objects. If you're trying to manage Azure Resource Manager Azure Virtual Desktop objects, see [this article](../create-host-pools-powershell.md).
-Host pools are a collection of one or more identical virtual machines within Azure Virtual Desktop tenant environments. Each host pool can contain an app group that users can interact with as they would on a physical desktop.
+Host pools are a collection of one or more identical virtual machines within Azure Virtual Desktop tenant environments. Each host pool can contain an application group that users can interact with as they would on a physical desktop.
## Use your PowerShell client to create a host pool
Run the next cmdlet to create a registration token to authorize a session host t
New-RdsRegistrationInfo -TenantName <tenantname> -HostPoolName <hostpoolname> -ExpirationHours <number of hours> | Select-Object -ExpandProperty Token | Out-File -FilePath <PathToRegFile> ```
-After that, run this cmdlet to add Azure Active Directory users to the default desktop app group for the host pool.
+After that, run this cmdlet to add Azure Active Directory users to the default desktop application group for the host pool.
```powershell
Add-RdsAppGroupUser -TenantName <tenantname> -HostPoolName <hostpoolname> -AppGroupName "Desktop Application Group" -UserPrincipalName <userupn>
```
-The **Add-RdsAppGroupUser** cmdlet doesn't support adding security groups and only adds one user at a time to the app group. If you want to add multiple users to the app group, rerun the cmdlet with the appropriate user principal names.
+The **Add-RdsAppGroupUser** cmdlet doesn't support adding security groups and only adds one user at a time to the application group. If you want to add multiple users to the application group, rerun the cmdlet with the appropriate user principal names.
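For example, a short loop over user principal names keeps the one-user-per-call behavior manageable; the UPNs below are placeholders.

```powershell
# Sketch: add several users to the default desktop application group, one Add-RdsAppGroupUser call per UPN.
$users = @("alice@contoso.com", "bob@contoso.com", "carol@contoso.com")

foreach ($upn in $users) {
    Add-RdsAppGroupUser -TenantName <tenantname> -HostPoolName <hostpoolname> `
        -AppGroupName "Desktop Application Group" -UserPrincipalName $upn
}
```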
Run the following cmdlet to export the registration token to a variable, which you will use later in [Register the virtual machines to the Azure Virtual Desktop host pool](#register-the-virtual-machines-to-the-azure-virtual-desktop-host-pool).
To register the Azure Virtual Desktop agents, do the following on each virtual m
## Next steps
-Now that you've made a host pool, you can populate it with RemoteApps. To learn more about how to manage apps in Azure Virtual Desktop, see the Manage app groups tutorial.
+Now that you've made a host pool, you can populate it with RemoteApps. To learn more about how to manage apps in Azure Virtual Desktop, see the Manage application groups tutorial.
> [!div class="nextstepaction"]
-> [Manage app groups tutorial](../manage-app-groups.md)
+> [Manage application groups tutorial](../manage-app-groups.md)
virtual-desktop Customize Feed Virtual Desktop Users 2019 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/virtual-desktop-fall-2019/customize-feed-virtual-desktop-users-2019.md
Add-RdsAccount -DeploymentUrl "https://rdbroker.wvd.microsoft.com"
You can change the display name for a published RemoteApp by setting the friendly name. By default, the friendly name is the same as the name of the RemoteApp program.
-To retrieve a list of published RemoteApps for an app group, run the following PowerShell cmdlet:
+To retrieve a list of published RemoteApps for an application group, run the following PowerShell cmdlet:
```powershell
Get-RdsRemoteApp -TenantName <tenantname> -HostPoolName <hostpoolname> -AppGroupName <appgroupname>
```
Set-RdsRemoteApp -TenantName <tenantname> -HostPoolName <hostpoolname> -AppGroup
## Customize the display name for a Remote Desktop
-You can change the display name for a published remote desktop by setting a friendly name. If you manually created a host pool and desktop app group through PowerShell, the default friendly name is "Session Desktop." If you created a host pool and desktop app group through the GitHub Azure Resource Manager template or the Azure Marketplace offering, the default friendly name is the same as the host pool name.
+You can change the display name for a published remote desktop by setting a friendly name. If you manually created a host pool and desktop application group through PowerShell, the default friendly name is "Session Desktop." If you created a host pool and desktop application group through the GitHub Azure Resource Manager template or the Azure Marketplace offering, the default friendly name is the same as the host pool name.
To retrieve the remote desktop resource, run the following PowerShell cmdlet:
virtual-desktop Data Locations 2019 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/virtual-desktop-fall-2019/data-locations-2019.md
Azure Virtual Desktop is currently available for all geographical locations. Ini
>Microsoft doesn't control or limit the regions where you or your users can access your user and app-specific data. >[!IMPORTANT]
->Azure Virtual Desktop stores global metadata information like tenant names, host pool names, app group names, and user principal names in a datacenter located in the United States. The stored metadata is encrypted at rest, and geo-redundant mirrors are maintained within the United States. All customer data, such as app settings and user data, resides in the location the customer chooses and isn't managed by the service.
+>Azure Virtual Desktop stores global metadata information like tenant names, host pool names, application group names, and user principal names in a datacenter located in the United States. The stored metadata is encrypted at rest, and geo-redundant mirrors are maintained within the United States. All customer data, such as app settings and user data, resides in the location the customer chooses and isn't managed by the service.
Service metadata is replicated in the United States for disaster recovery purposes.
virtual-desktop Delegated Access Virtual Desktop 2019 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/virtual-desktop-fall-2019/delegated-access-virtual-desktop-2019.md
Azure Virtual Desktop delegated access supports the following values for each el
* Tenant groups
* Tenants
* Host pools
- * App groups
+ * Application groups
## Built-in roles
You can run the following cmdlets to create, view, and remove role assignments:
You can modify the basic three cmdlets with the following parameters:
* **AadTenantId**: specifies the Azure Active Directory tenant ID from which the service principal is a member.
-* **AppGroupName**: name of the Remote Desktop app group.
+* **AppGroupName**: name of the Remote Desktop application group.
* **Diagnostics**: indicates the diagnostics scope. (Must be paired with either the **Infrastructure** or **Tenant** parameters.)
* **HostPoolName**: name of the Remote Desktop host pool.
* **Infrastructure**: indicates the infrastructure scope.
virtual-desktop Diagnostics Log Analytics 2019 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/virtual-desktop-fall-2019/diagnostics-log-analytics-2019.md
Azure Virtual Desktop offers a diagnostics feature that allows the administrator
- Feed subscription activities: when a user tries to connect to their feed through Microsoft Remote Desktop applications.
- Connection activities: when a user tries to connect to a desktop or RemoteApp through Microsoft Remote Desktop applications.
-- Management activities: when an administrator performs management operations on the system, such as creating host pools, assigning users to app groups, and creating role assignments.
+- Management activities: when an administrator performs management operations on the system, such as creating host pools, assigning users to application groups, and creating role assignments.
Connections that don't reach Azure Virtual Desktop won't show up in diagnostics results because the diagnostics role service itself is part of Azure Virtual Desktop. Azure Virtual Desktop connection issues can happen when the user is experiencing network connectivity issues.
virtual-desktop Diagnostics Role Service 2019 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/virtual-desktop-fall-2019/diagnostics-role-service-2019.md
Azure Virtual Desktop offers a diagnostics feature that allows the administrator
* Feed subscription activities: the end-user triggers these activities whenever they try to connect to their feed through Microsoft Remote Desktop applications.
* Connection activities: the end-user triggers these activities whenever they try to connect to a desktop or RemoteApp through Microsoft Remote Desktop applications.
-* Management activities: the administrator triggers these activities whenever they perform management operations on the system, such as creating host pools, assigning users to app groups, and creating role assignments.
+* Management activities: the administrator triggers these activities whenever they perform management operations on the system, such as creating host pools, assigning users to application groups, and creating role assignments.
Connections that don't reach Azure Virtual Desktop won't show up in diagnostics results because the diagnostics role service itself is part of Azure Virtual Desktop. Azure Virtual Desktop connection issues can happen when the end-user is experiencing network connectivity issues.
The following table lists common errors your admins might run into.
|1000|TenantNotFound|The tenant name you entered doesn't match any existing tenants. Review the tenant name for typos and try again.|
|1006|TenantCannotBeRemovedHasSessionHostPools|You can't delete a tenant as long as it contains objects. Delete the session host pools first, then try again.|
|2000|HostPoolNotFound|The host pool name you entered doesn't match any existing host pools. Review the host pool name for typos and try again.|
-|2005|HostPoolCannotBeRemovedHasApplicationGroups|You can't delete a host pool as long as it contains objects. Remove all app groups in the host pool first.|
+|2005|HostPoolCannotBeRemovedHasApplicationGroups|You can't delete a host pool as long as it contains objects. Remove all application groups in the host pool first.|
|2004|HostPoolCannotBeRemovedHasSessionHosts|Remove all session hosts first before deleting the session host pool.|
|5001|SessionHostNotFound|The session host you queried might be offline. Check the host pool's status.|
|5008|SessionHostUserSessionsExist |You must sign out all users on the session host before executing your intended management activity.|
-|6000|AppGroupNotFound|The app group name you entered doesn't match any existing app groups. Review the app group name for typos and try again.|
+|6000|AppGroupNotFound|The application group name you entered doesn't match any existing application groups. Review the application group name for typos and try again.|
|6022|RemoteAppNotFound|The RemoteApp name you entered doesn't match any RemoteApps. Review RemoteApp name for typos and try again.|
|6010|PublishedItemsExist|The name of the resource you're trying to publish is the same as a resource that already exists. Change the resource name and try again.|
|7002|NameNotValidWhiteSpace|Don't use white space in the name.|
virtual-desktop Environment Setup 2019 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/virtual-desktop-fall-2019/environment-setup-2019.md
A host pool is a collection of Azure virtual machines that register to Azure Vir
A host pool can be one of two types:
- Personal, where each session host is assigned to individual users.
-- Pooled, where session hosts can accept connections from any user authorized to an app group within the host pool.
+- Pooled, where session hosts can accept connections from any user authorized to an application group within the host pool.
-You can set additional properties on the host pool to change its load-balancing behavior, how many sessions each session host can take, and what the user can do to session hosts in the host pool while signed in to their Azure Virtual Desktop sessions. You control the resources published to users through app groups.
+You can set additional properties on the host pool to change its load-balancing behavior, how many sessions each session host can take, and what the user can do to session hosts in the host pool while signed in to their Azure Virtual Desktop sessions. You control the resources published to users through application groups.
-## App groups
+## Application groups
-An app group is a logical grouping of applications installed on session hosts in the host pool. An app group can be one of two types:
+An application group is a logical grouping of applications installed on session hosts in the host pool. An application group can be one of two types:
-- RemoteApp, where users access the RemoteApps you individually select and publish to the app group
+- RemoteApp, where users access the RemoteApps you individually select and publish to the application group
- Desktop, where users access the full desktop
-By default, a desktop app group (named "Desktop Application Group") is automatically created whenever you create a host pool. You can remove this app group at any time. However, you can't create another desktop app group in the host pool while a desktop app group exists. To publish RemoteApps, you must create a RemoteApp app group. You can create multiple RemoteApp app groups to accommodate different worker scenarios. Different RemoteApp app groups can also contain overlapping RemoteApps.
+By default, a desktop application group (named "Desktop Application Group") is automatically created whenever you create a host pool. You can remove this application group at any time. However, you can't create another desktop application group in the host pool while a desktop application group exists. To publish RemoteApps, you must create a RemoteApp application group. You can create multiple RemoteApp application groups to accommodate different worker scenarios. Different RemoteApp application groups can also contain overlapping RemoteApps.
-To publish resources to users, you must assign them to app groups. When assigning users to app groups, consider the following things:
+To publish resources to users, you must assign them to application groups. When assigning users to application groups, consider the following things:
-- A user can't be assigned to both a desktop app group and a RemoteApp app group in the same host pool.-- A user can be assigned to multiple app groups within the same host pool, and their feed will be an accumulation of both app groups.
+- A user can't be assigned to both a desktop application group and a RemoteApp application group in the same host pool.
+- A user can be assigned to multiple application groups within the same host pool, and their feed will be an accumulation of both application groups.
## Tenant groups
-In Azure Virtual Desktop, the Azure Virtual Desktop tenant is where most of the setup and configuration happens. The Azure Virtual Desktop tenant contains the host pools, app groups, and app group user assignments. However, there may be certain situations where you need to manage multiple Azure Virtual Desktop tenants at once, particularly if you're a Cloud Service Provider (CSP) or a hosting partner. In these situations, you can use a custom Azure Virtual Desktop tenant group to place each of the customers' Azure Virtual Desktop tenants and centrally manage access. However, if you're only managing a single Azure Virtual Desktop tenant, the tenant group concept doesn't apply and you can continue to operate and manage your tenant that exists in the default tenant group.
+In Azure Virtual Desktop, the Azure Virtual Desktop tenant is where most of the setup and configuration happens. The Azure Virtual Desktop tenant contains the host pools, application groups, and application group user assignments. However, there may be certain situations where you need to manage multiple Azure Virtual Desktop tenants at once, particularly if you're a Cloud Service Provider (CSP) or a hosting partner. In these situations, you can use a custom Azure Virtual Desktop tenant group to place each of the customers' Azure Virtual Desktop tenants and centrally manage access. However, if you're only managing a single Azure Virtual Desktop tenant, the tenant group concept doesn't apply and you can continue to operate and manage your tenant that exists in the default tenant group.
## End users
-After you've assigned users to their app groups, they can connect to a Azure Virtual Desktop deployment with any of the Azure Virtual Desktop clients.
+After you've assigned users to their application groups, they can connect to an Azure Virtual Desktop deployment with any of the Azure Virtual Desktop clients.
## Next steps
virtual-desktop Manage App Groups 2019 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/virtual-desktop-fall-2019/manage-app-groups-2019.md
Title: Manage app groups for Azure Virtual Desktop (classic) - Azure
+ Title: Manage application groups for Azure Virtual Desktop (classic) - Azure
description: Learn how to set up Azure Virtual Desktop (classic) tenants in Azure Active Directory (Azure AD).
Last updated 08/16/2021
-# Tutorial: Manage app groups for Azure Virtual Desktop (classic)
+# Tutorial: Manage application groups for Azure Virtual Desktop (classic)
>[!IMPORTANT] >This content applies to Azure Virtual Desktop (classic), which doesn't support Azure Resource Manager Azure Virtual Desktop objects. If you're trying to manage Azure Resource Manager Azure Virtual Desktop objects, see [this article](../manage-app-groups.md).
-The default app group created for a new Azure Virtual Desktop host pool also publishes the full desktop. In addition, you can create one or more RemoteApp application groups for the host pool. Follow this tutorial to create a RemoteApp app group and publish individual **Start** menu apps.
+The default application group created for a new Azure Virtual Desktop host pool also publishes the full desktop. In addition, you can create one or more RemoteApp application groups for the host pool. Follow this tutorial to create a RemoteApp application group and publish individual **Start** menu apps.
In this tutorial, learn how to:
Add-RdsAccount -DeploymentUrl "https://rdbroker.wvd.microsoft.com"
## Create a RemoteApp group
-1. Run the following PowerShell cmdlet to create a new empty RemoteApp app group.
+1. Run the following PowerShell cmdlet to create a new empty RemoteApp application group.
```powershell New-RdsAppGroup -TenantName <tenantname> -HostPoolName <hostpoolname> -Name <appgroupname> -ResourceType "RemoteApp" ```
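   For example, a hedged sketch with hypothetical values — a RemoteApp application group named `OfficeApps` in a host pool called `hostpool01` under the `contoso` tenant; substitute your own names:

   ```powershell
   # Example values only; replace with your own tenant, host pool, and group names.
   New-RdsAppGroup -TenantName "contoso" -HostPoolName "hostpool01" -Name "OfficeApps" -ResourceType "RemoteApp"
   ```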
-2. (Optional) To verify that the app group was created, you can run the following cmdlet to see a list of all app groups for the host pool.
+2. (Optional) To verify that the application group was created, you can run the following cmdlet to see a list of all application groups for the host pool.
```powershell Get-RdsAppGroup -TenantName <tenantname> -HostPoolName <hostpoolname>
Add-RdsAccount -DeploymentUrl "https://rdbroker.wvd.microsoft.com"
Get-RdsRemoteApp -TenantName <tenantname> -HostPoolName <hostpoolname> -AppGroupName <appgroupname> ```
-7. Repeat steps 1–5 for each application that you want to publish for this app group.
-8. Run the following cmdlet to grant users access to the RemoteApp programs in the app group.
+7. Repeat steps 1–5 for each application that you want to publish for this application group.
+8. Run the following cmdlet to grant users access to the RemoteApp programs in the application group.
```powershell Add-RdsAppGroupUser -TenantName <tenantname> -HostPoolName <hostpoolname> -AppGroupName <appgroupname> -UserPrincipalName <userupn>
Add-RdsAccount -DeploymentUrl "https://rdbroker.wvd.microsoft.com"
## Next steps
-In this tutorial, you learned how to create an app group, populate it with RemoteApp programs, and assign users to the app group. To learn how to create a validation host pool, see the following tutorial. You can use a validation host pool to monitor service updates before rolling them out to your production environment.
+In this tutorial, you learned how to create an application group, populate it with RemoteApp programs, and assign users to the application group. To learn how to create a validation host pool, see the following tutorial. You can use a validation host pool to monitor service updates before rolling them out to your production environment.
> [!div class="nextstepaction"] > [Create a host pool to validate service updates](create-validation-host-pool-2019.md)
virtual-desktop Tenant Setup Azure Active Directory https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/virtual-desktop-fall-2019/tenant-setup-azure-active-directory.md
> >Learn about how to create a host pool in Azure Virtual Desktop at [Tutorial: Create a host pool](../create-host-pools-azure-marketplace.md).
-Creating a tenant in Azure Virtual Desktop is the first step toward building your desktop virtualization solution. A tenant is a group of one or more host pools. Each host pool consists of multiple session hosts, running as virtual machines in Azure and registered to the Azure Virtual Desktop service. Each host pool also consists of one or more app groups that are used to publish remote desktop and remote application resources to users. With a tenant, you can build host pools, create app groups, assign users, and make connections through the service.
+Creating a tenant in Azure Virtual Desktop is the first step toward building your desktop virtualization solution. A tenant is a group of one or more host pools. Each host pool consists of multiple session hosts, running as virtual machines in Azure and registered to the Azure Virtual Desktop service. Each host pool also consists of one or more application groups that are used to publish remote desktop and remote application resources to users. With a tenant, you can build host pools, create application groups, assign users, and make connections through the service.
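As a rough sketch of that first step, tenant creation with the classic RDS PowerShell module could look like the following. The directory ID and subscription ID values are placeholders, and the exact cmdlet parameters should be confirmed against the classic module documentation.

```powershell
# Sign in to the Azure Virtual Desktop (classic) management plane.
Add-RdsAccount -DeploymentUrl "https://rdbroker.wvd.microsoft.com"

# Create the tenant; <DirectoryId> and <SubscriptionId> are placeholders.
New-RdsTenant -Name <TenantName> -AadTenantId <DirectoryId> -AzureSubscriptionId <SubscriptionId>
```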
In this tutorial, learn how to:
virtual-desktop Troubleshoot Powershell 2019 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/virtual-desktop-fall-2019/troubleshoot-powershell-2019.md
This section lists PowerShell commands that are typically used while setting up
Add-RdsAppGroupUser -TenantName <TenantName> -HostPoolName <HostPoolName> -AppGroupName 'Desktop Application Group' -UserPrincipalName <UserName> ```
-**Cause:** The username used has been already assigned to an app group of a different type. Users can't be assigned to both a remote desktop and remote app group under the same session host pool.
+**Cause:** The username you used has already been assigned to an application group of a different type. Users can't be assigned to both a desktop application group and a RemoteApp application group under the same host pool.
**Fix:** If the user needs both remote apps and the remote desktop, create different host pools, or grant the user access to the remote desktop, which permits the use of any application on the session host VM.
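For example, the following hedged sketch moves a user from a RemoteApp application group to the desktop application group so they receive the full desktop. The tenant, host pool, group, and user values are placeholders, and `Remove-RdsAppGroupUser` is assumed to be available in your version of the classic module.

```powershell
# Remove the user from the RemoteApp application group first (assumed cmdlet),
# then grant access to the desktop application group in the same host pool.
Remove-RdsAppGroupUser -TenantName <tenantname> -HostPoolName <hostpoolname> -AppGroupName <remoteappgroupname> -UserPrincipalName <userupn>
Add-RdsAppGroupUser -TenantName <tenantname> -HostPoolName <hostpoolname> -AppGroupName "Desktop Application Group" -UserPrincipalName <userupn>
```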
virtual-desktop Troubleshoot Set Up Overview 2019 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/virtual-desktop-fall-2019/troubleshoot-set-up-overview-2019.md
Use the following table to identify and resolve issues you may encounter when se
| Managing Azure Virtual Desktop configuration tied to host pools and application groups (app groups) | See [Azure Virtual Desktop PowerShell](troubleshoot-powershell-2019.md), or [open an Azure support request](https://azure.microsoft.com/support/create-ticket/), select **Azure Virtual Desktop** for the service, then select the appropriate problem type.| | Deploying and managing FSLogix Profile Containers | See [Troubleshooting guide for FSLogix products](/fslogix/fslogix-trouble-shooting-ht/) and if that doesn't resolve the issue, [Open an Azure support request](https://azure.microsoft.com/support/create-ticket/), select **Azure Virtual Desktop** for the service, select **FSLogix** for the problem type, then select the appropriate problem subtype. | | Remote desktop clients malfunction on start | See [Troubleshoot the Remote Desktop client](../troubleshoot-client-windows.md) and if that doesn't resolve the issue, [Open an Azure support request](https://azure.microsoft.com/support/create-ticket/), select **Azure Virtual Desktop** for the service, then select **Remote Desktop clients** for the problem type. <br> <br> If it's a network issue, your users need to contact their network administrator. |
-| Connected but no feed | Troubleshoot using the [User connects but nothing is displayed (no feed)](troubleshoot-service-connection-2019.md#user-connects-but-nothing-is-displayed-no-feed) section of [Azure Virtual Desktop service connections](troubleshoot-service-connection-2019.md). <br> <br> If your users have been assigned to an app group, [open an Azure support request](https://azure.microsoft.com/support/create-ticket/), select **Azure Virtual Desktop** for the service, then select **Remote Desktop Clients** for the problem type. |
+| Connected but no feed | Troubleshoot using the [User connects but nothing is displayed (no feed)](troubleshoot-service-connection-2019.md#user-connects-but-nothing-is-displayed-no-feed) section of [Azure Virtual Desktop service connections](troubleshoot-service-connection-2019.md). <br> <br> If your users have been assigned to an application group, [open an Azure support request](https://azure.microsoft.com/support/create-ticket/), select **Azure Virtual Desktop** for the service, then select **Remote Desktop Clients** for the problem type. |
| Feed discovery problems due to the network | Your users need to contact their network administrator. | | Connecting clients | See [Azure Virtual Desktop service connections](troubleshoot-service-connection-2019.md) and if that doesn't solve your issue, see [Session host virtual machine configuration](troubleshoot-vm-configuration-2019.md). | | Responsiveness of remote applications or desktop | If issues are tied to a specific application or product, contact the team responsible for that product. |
virtual-desktop Whats New https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/whats-new.md
Here's what changed in September 2021.
You can now use Azure Resource Manager templates for any update you want to apply to your session hosts after deployment. You can access this feature by selecting the **Virtual machines** tab while creating a host pool.
-You can also now set host pool, app group, and workspace diagnostic settings while creating host pools instead of afterwards. Configuring these settings during the host pool creation process also automatically sets up reporting data for Azure Virtual Desktop Insights.
+You can also now set host pool, application group, and workspace diagnostic settings while creating host pools instead of afterwards. Configuring these settings during the host pool creation process also automatically sets up reporting data for Azure Virtual Desktop Insights.
### Azure Active Directory domain join
virtual-machines Capacity Reservation Associate Virtual Machine Scale Set Flex https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/capacity-reservation-associate-virtual-machine-scale-set-flex.md
Title: Associate a virtual machine scale set with flexible orchestration to a Ca
description: Learn how to associate a new virtual machine scale set with flexible orchestration mode to a Capacity Reservation group. -+ Last updated 11/22/2022
virtual-machines Capacity Reservation Associate Virtual Machine Scale Set https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/capacity-reservation-associate-virtual-machine-scale-set.md
Title: Associate a virtual machine scale set with uniform orchestration to a Cap
description: Learn how to associate a new or existing virtual machine scale with uniform orchestration set to a Capacity Reservation group. -+ Last updated 11/22/2022
virtual-machines Capacity Reservation Create https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/capacity-reservation-create.md
Title: Create a Capacity Reservation in Azure
description: Learn how to reserve Compute capacity in an Azure region or an Availability Zone by creating a Capacity Reservation. -+ Last updated 11/22/2022
virtual-machines Capacity Reservation Modify https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/capacity-reservation-modify.md
Title: Modify a Capacity Reservation in Azure
description: Learn how to modify a Capacity Reservation. -+ Last updated 11/22/2022
virtual-machines Capacity Reservation Overallocate https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/capacity-reservation-overallocate.md
Title: Overallocating Capacity Reservation in Azure
description: Learn how overallocation works when it comes to Capacity Reservation. -+ Last updated 11/22/2022
virtual-machines Capacity Reservation Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/capacity-reservation-overview.md
Title: On-demand Capacity Reservation in Azure
description: Learn how to reserve compute capacity in an Azure region or an Availability Zone with Capacity Reservation. -+ Last updated 02/24/2023
virtual-machines Capacity Reservation Remove Virtual Machine Scale Set https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/capacity-reservation-remove-virtual-machine-scale-set.md
Title: Remove a Virtual Machine Scale Set association from a Capacity Reservatio
description: Learn how to remove a Virtual Machine Scale Set from a Capacity Reservation group. -+ Last updated 11/22/2022
virtual-machines Capacity Reservation Remove Vm https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/capacity-reservation-remove-vm.md
Title: Remove a virtual machine association from a Capacity Reservation group
description: Learn how to remove a virtual machine from a Capacity Reservation group. -+ Last updated 11/22/2022
virtual-machines Dlsv5 Dldsv5 Series https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/dlsv5-dldsv5-series.md
Title: 'Dlsv5 and Dldsv5 (preview)' #Required; page title is displayed in search results. 60 characters max.
-description: Specifications for the Dlsv5 and Dldsv5-series VMs. #Required; this appears in search as the short description
+ Title: Dlsv5 and Dldsv5
+description: Specifications for the Dlsv5 and Dldsv5-series VMs.
Last updated 02/16/2023
-# Dlsv5 and Dldsv5-series (preview)
+# Dlsv5 and Dldsv5-series
The Dlsv5 and Dldsv5-series virtual machines run on the Intel&reg; Xeon&reg; Platinum 8370C (Ice Lake) processor in a [hyper threaded](https://www.intel.com/content/www/us/en/architecture-and-technology/hyper-threading/hyper-threading-technology.html) configuration. This new processor features an all-core turbo clock speed of 3.5 GHz with [Intel&reg; Turbo Boost Technology](https://www.intel.com/content/www/us/en/architecture-and-technology/turbo-boost/turbo-boost-technology.html), [Intel&reg; Advanced-Vector Extensions 512 (Intel&reg; AVX-512)](https://www.intel.com/content/www/us/en/architecture-and-technology/avx-512-overview.html) and [Intel&reg; Deep Learning Boost](https://software.intel.com/content/www/us/en/develop/topics/ai/deep-learning-boost.html). The Dlsv5 and Dldsv5 VM series provides 2 GiB of RAM per vCPU and is optimized for workloads that require less RAM per vCPU than standard VM sizes. Target workloads include web servers, gaming, video encoding, AI/ML, and batch processing. -
-> [!NOTE]
-> This feature is currently in preview. Previews are made available to you on the condition that you agree to the [supplemental terms of use](https://azure.microsoft.com/support/legal/preview-supplemental-terms/). Some aspects of this feature may change prior to general availability (GA).
- ## Dlsv5-series Dlsv5-series virtual machines run on the 3rd Generation Intel® Xeon® Platinum 8370C (Ice Lake) processor reaching an all core turbo clock speed of up to 3.5 GHz. These virtual machines offer up to 96 vCPU and 192 GiB of RAM. These VM sizes can reduce cost when running non-memory intensive applications.
virtual-machines Maintenance And Updates https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/maintenance-and-updates.md
Within a VM, you can get notifications about upcoming maintenance by [using Sche
Most platform updates don't affect customer VMs. When a no-impact update isn't possible, Azure chooses the update mechanism that's least impactful to customer VMs.
-Most nonzero-impact maintenance pauses the VM for less than 10 seconds. In certain cases, Azure uses memory-preserving maintenance mechanisms. These mechanisms pause the VM, typically for about 30 seconds, and preserve the memory in RAM. The VM is then resumed, and its clock is automatically synchronized.
+When VM-impacting maintenance is required, it's almost always completed through a VM pause of less than 10 seconds. In rare circumstances, no more than once every 18 months for general-purpose VM sizes, Azure uses a mechanism that pauses the VM for about 30 seconds. After any pause operation, the VM clock is automatically synchronized upon resume.
Memory-preserving maintenance works for more than 90 percent of Azure VMs. It doesn't work for G, M, N, and H series. Azure increasingly uses live-migration technologies and improves memory-preserving maintenance mechanisms to reduce the pause durations.
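To see upcoming maintenance from inside a VM, you can poll the Scheduled Events endpoint of the Azure Instance Metadata Service. The sketch below is illustrative; the API version shown is an assumption and should be checked against the Scheduled Events documentation.

```powershell
# Query Scheduled Events from within the VM (link-local IMDS endpoint).
# The api-version value is an assumption; confirm the current version in the docs.
$uri = "http://169.254.169.254/metadata/scheduledevents?api-version=2020-07-01"
$events = Invoke-RestMethod -Headers @{ Metadata = "true" } -Method GET -Uri $uri
$events.Events | Format-List EventId, EventType, ResourceType, Resources, EventStatus, NotBefore
```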
virtual-network-manager Concept Azure Policy Integration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network-manager/concept-azure-policy-integration.md
Previously updated : 3/22/2023 Last updated : 04/14/2023
Azure Policy evaluates resources in Azure by comparing the properties of those r
Creating and implementing a policy in Azure Policy begins with creating a policy definition resource. Every policy definition has conditions under which it's enforced, and a defined effect that takes place if the conditions are met.
-With network groups, your policy definition includes your conditional expression for matching virtual networks meeting your criteria, and specifies the destination network group where any matching resources are placed. The `addToNetworkGroup` effect is used to accomplish this. Here's a sample of a policy rule definition with the `addToNetworkGroup` effect.
+With network groups, your policy definition includes your conditional expression for matching virtual networks meeting your criteria, and specifies the destination network group where any matching resources are placed. The `addToNetworkGroup` effect is used to place resources in the destination network group. Here's a sample of a policy rule definition with the `addToNetworkGroup` effect.
```json
With network groups, your policy definition includes your conditional expression
} ```
+> [!IMPORTANT]
+> When defining a policy, the `networkGroupId` must be the full resource ID of the target network group, as seen in the sample definition. The value can't be parameterized in the policy definition.
+>
+> If you need to parameterize the network group, you can use an Azure Resource Manager template to create the policy definition and assignment.
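For example, a minimal sketch of creating such a definition with Azure PowerShell might look like the following. The resource IDs and the name filter are placeholders, and the `Microsoft.Network.Data` mode value is an assumption you should verify against the current Azure Virtual Network Manager documentation.

```powershell
# Sketch: policy definition that adds matching virtual networks to a network group.
# All IDs below are placeholders; networkGroupId must be the full resource ID.
$rule = @'
{
  "if": {
    "allOf": [
      { "field": "type", "equals": "Microsoft.Network/virtualNetworks" },
      { "field": "name", "contains": "-prod" }
    ]
  },
  "then": {
    "effect": "addToNetworkGroup",
    "details": {
      "networkGroupId": "/subscriptions/<subscriptionId>/resourceGroups/<resourceGroup>/providers/Microsoft.Network/networkManagers/<networkManagerName>/networkGroups/<networkGroupName>"
    }
  }
}
'@

New-AzPolicyDefinition -Name "add-prod-vnets-to-network-group" `
  -Mode "Microsoft.Network.Data" `
  -Policy $rule
```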
+ Learn more about [policy definition structure](../governance/policy/concepts/definition-structure.md). ## Policy assignments
-Similar to Virtual Network Manager configurations, policy definitions don't immediately take effect when you create them. To begin applying, you must create a Policy Assignment, which assigns a definition to evaluate at a given scope. Currently, all resources within the scope are evaluated against the definition. This allows you to have a single reusable definition that you can assign at multiple places for more granular group membership control. Learn more information on the [Assignment Structure](../governance/policy/concepts/assignment-structure.md) for Azure Policy.
+Similar to Virtual Network Manager configurations, policy definitions don't immediately take effect when you create them. To begin applying them, you must create a policy assignment, which assigns a definition to evaluate at a given scope. Currently, all resources within the scope are evaluated against the definition, so a single reusable definition can be assigned in multiple places for more granular group membership control. For more information, see the [Assignment Structure](../governance/policy/concepts/assignment-structure.md) for Azure Policy.
Policy definitions and assignments can be created with API/PS/CLI or the [Azure Policy Portal]().
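As a rough PowerShell sketch (the names and scope are placeholders), assigning a definition like the one above at subscription scope could look like this:

```powershell
# Sketch: assign an existing policy definition at subscription scope.
$definition = Get-AzPolicyDefinition -Name "add-prod-vnets-to-network-group"

New-AzPolicyAssignment -Name "add-prod-vnets-assignment" `
  -PolicyDefinition $definition `
  -Scope "/subscriptions/<subscriptionId>"
```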
To set register the needed providers, use [Register-AzResourceProvider](/powersh
### Type filtering
-When configuring your policy definitions, it's recommended to always include a **type** condition to scope it to virtual networks. This allows Policy to filter out non virtual network operations and improve the efficiency of your policy resources.
+When configuring your policy definitions, it's recommended to always include a **type** condition to scope them to virtual networks. This condition allows a policy to filter out operations that don't involve virtual networks and improves the efficiency of your policy resources.
### Regional slicing
If you're following management group best practices using [Azure management grou
### Deleting an Azure Policy definition associated with a network group
-You may come across instances where you no longer need an Azure Policy definition. This could be when a network group associated with a Policy is deleted, or you have an unused Policy that you no longer need. To delete the Policy, you need to delete the Policy association object, and then delete the policy definition in [Azure Policy](../governance/policy/tutorials/create-custom-policy-definition.md#clean-up-resources). Once this has been completed, the definition can't be reused or re-referenced by name when associating a new definition to a network group.
+You may come across instances where you no longer need an Azure Policy definition, for example when a network group associated with a policy is deleted, or when you have an unused policy that you no longer need. To delete the policy, you need to delete the policy association object, and then delete the policy definition in [Azure Policy](../governance/policy/tutorials/create-custom-policy-definition.md#clean-up-resources). Once deletion has been completed, the definition name can't be reused or re-referenced when associating a new definition to a network group.
## Next steps
virtual-network-manager Concept Event Logs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network-manager/concept-event-logs.md
+
+ Title: Event log options for Azure Virtual Network Manager
+description: This article covers the event log options for Azure Virtual Network Manager.
++++ Last updated : 04/13/2023++
+# Event log options for Azure Virtual Network Manager
+
+Azure Virtual Network Manager uses Azure Monitor for data collection and analysis like many other Azure services. Azure Virtual Network Manager provides event logs for each network manager. You can store and view event logs with Azure Monitor's Log Analytics tool in the Azure portal, and through a storage account. You may also send these logs to an event hub or partner solution.
+
+## Supported log categories
+
+Azure Virtual Network Manager currently provides the following log categories:
+- Network group membership change
+ - Track when a particular virtual network's network group membership is modified. In other words, a log is emitted when a virtual network is added to or removed from a network group. This can be used to trace network group membership changes over time and to capture a snapshot of a particular virtual network's network group membership.
+
+## Network group membership change attributes
+
+This category emits one log per network group membership change. So, when a virtual network is added to or removed from a network group, a log is emitted correlating to that single addition or removal for that particular virtual network. The following attributes correspond to the logs that would be sent to your storage account; Log Analytics logs have slightly different attributes.
+
+| Attribute | Description |
+|--|-|
+| time | Datetime when the event was logged. |
+| resourceId | Resource ID of the network manager. |
+| location | Location of the virtual network resource. |
+| operationName | Operation that resulted in the VNet being added or removed. Always the Microsoft.Network/virtualNetworks/networkGroupMembership/write operation. |
+| category | Category of this log. Always NetworkGroupMembershipChange. |
+| resultType | Indicates successful or failed operation. |
+| correlationId | GUID that can help relate or debug logs. |
+| level | Always Info. |
+| properties | Collection of properties of the log. |
+
+Within the `properties` attribute are several nested attributes:
+
+| properties attributes | Description |
+|--|-|
+| Message | Basic success or failure message. |
+| MembershipId | Default membership ID of the virtual network. |
+| GroupMemberships | Collection of what network groups the virtual network belongs to. There may be multiple `NetworkGroupId` and `Sources` listed within this property since a virtual network can belong to multiple network groups simultaneously. |
+| MemberResourceIds | Resource ID of the virtual network that was added to or removed from a network group. |
+
+Within the `GroupMemberships` attribute are several nested attributes:
+
+| GroupMemberships attributes | Description |
+|--|-|
+| NetworkGroupId | ID of a network group the virtual network belongs to. |
+| Sources | Collection of how the virtual network is a member of the network group. |
+
+Within the `Sources` attribute are several nested attributes:
+
+| Sources attributes | Description |
+|-|-|
+| Type | Denotes whether the virtual network was added manually (StaticMembership) or conditionally via Azure Policy (Policy). |
+| StaticMemberId | If the Type value is StaticMembership, this property appears. |
+| PolicyAssignmentId | If the Type value is Policy, this property appears. ID of the Azure Policy assignment that associates the Azure Policy definition to the network group. |
+| PolicyDefinitionId | If the Type value is Policy, this property appears. ID of the Azure Policy definition that contains the conditions for the network group's membership. |
+
+## Accessing logs
+
+Depending on how you consume event logs, you need to set up a Log Analytics workspace or a storage account for storing your log events.
+- Learn to [create a Log Analytics workspace](../azure-monitor/logs/quick-create-workspace.md).
+- Learn to [create a storage account](../storage/common/storage-account-create.md).
+
+When setting up a Log Analytics workspace or a storage account, you need to select a region. If you're using a storage account, it needs to be in the same region as the virtual network manager you're accessing logs from. If you're using a Log Analytics workspace, it can be in any region.
+
+The network manager accessing the events isn't required to be in the same subscription as the Log Analytics workspace or the storage account used for storage, but permissions may restrict your ability to access logs across different subscriptions.
+
+> [!NOTE]
+> At least one virtual network must be added to or removed from a network group in order to generate logs. A log is generated for this event a couple of minutes after the network group membership change occurs.
+
+## Next steps
+- Learn how to create an [Azure Virtual Network Manager](create-virtual-network-manager-portal.md) instance using the Azure portal.
+- Learn more about [network groups](concept-network-groups.md) in Azure Virtual Network Manager.
+
virtual-network-manager How To Configure Event Logs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network-manager/how-to-configure-event-logs.md
+
+ Title: Configure event logs for Azure Virtual Network Manager
+description: This article describes how to configure and view event logs for Azure Virtual Network Manager. This includes how to access event logs in a Log Analytics workspace and a storage account.
++++ Last updated : 04/13/2023++
+# Configure event logs for Azure Virtual Network Manager
+
+When configurations change in Azure Virtual Network Manager, the changes can affect virtual networks that are associated with network groups in your instance. With Azure Monitor, you can monitor Azure Virtual Network Manager for virtual network changes.
+
+In this article, you learn how to monitor Azure Virtual Network Manager for virtual network changes with Log Analytics or a storage account.
+
+## Prerequisites
+- An Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F).
+- A deployed instance of [Azure Virtual Network Manager](./create-virtual-network-manager-portal.md) in your subscription, with managed virtual networks.
+- You deployed either a [Log Analytics workspace](../azure-monitor/essentials/tutorial-resource-logs.md#create-a-log-analytics-workspace) or a [storage account](../storage/common/storage-account-create.md) to store event logs and observe data related to Azure Virtual Network Manager.
+
+## Configure Diagnostic Settings
+
+Depending on how you consume event logs, you need to set up a Log Analytics workspace or a storage account for storing your log events. These serve as storage targets when you configure diagnostic settings for Azure Virtual Network Manager. Once you've configured your diagnostic settings, you can view the event logs in the Log Analytics workspace or storage account.
+
+> [!NOTE]
+> At least one virtual network must be added to or removed from a network group in order to generate logs. A log is generated for this event a couple of minutes after the network group membership change occurs.
+### Configure event logs with Log Analytics
+
+Log Analytics is one option for storing event logs. In this task, you configure your Azure Virtual Network Manager instance to use a Log Analytics workspace. This task assumes you've already deployed a Log Analytics workspace. If you haven't, see [Create a Log Analytics workspace](../azure-monitor/essentials/tutorial-resource-logs.md#create-a-log-analytics-workspace).
+
+1. Navigate to the network manager you want to obtain the logs of.
+1. Under **Monitoring** in the left pane, select **Diagnostic settings**.
+1. Select **+ Add diagnostic setting** and enter a diagnostic setting name.
+1. Under **Logs**, select **Network Group Membership Change**.
+1. Under **Destination details**, select **Send to Log Analytics** and choose your subscription and Log Analytics workspace from the dropdown menus.
+
+ :::image type="content" source="media/how-to-configure-event-logging/log-analytics-diagnostic-settings.png" alt-text="Screenshot of Diagnostic settings page for setting up Log Analytics workspace.":::
+
+1. Select **Save** and close the window.
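If you prefer to script this step, the following hedged sketch shows one way to create an equivalent diagnostic setting with the Az.Monitor PowerShell module. The resource IDs are placeholders, and the cmdlet names assume a recent Az.Monitor version.

```powershell
# Sketch: send NetworkGroupMembershipChange logs to a Log Analytics workspace.
# Resource IDs are placeholders; requires a recent Az.Monitor module.
$networkManagerId = "/subscriptions/<subscriptionId>/resourceGroups/<resourceGroup>/providers/Microsoft.Network/networkManagers/<networkManagerName>"
$workspaceId = "/subscriptions/<subscriptionId>/resourceGroups/<resourceGroup>/providers/Microsoft.OperationalInsights/workspaces/<workspaceName>"

$log = New-AzDiagnosticSettingLogSettingsObject -Category "NetworkGroupMembershipChange" -Enabled $true

# To use a storage account instead, replace -WorkspaceId with -StorageAccountId.
New-AzDiagnosticSetting -Name "avnm-event-logs" -ResourceId $networkManagerId -WorkspaceId $workspaceId -Log $log
```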
+
+### Configure event logs with a storage account
+
+A storage account is another option for storing event logs. In this task, you configure your Azure Virtual Network Manager instance to use a storage account. This task assumes you've already deployed a storage account. If you haven't, see [Create a storage account](../storage/common/storage-account-create.md).
+
+1. Navigate to the network manager you want to obtain the logs of.
+1. Under **Monitoring** in the left pane, select **Diagnostic settings**.
+1. Select **+ Add diagnostic setting** and enter a diagnostic setting name.
+1. Under **Destination details**, select **Send to storage account** and choose your subscription and storage account from the dropdown menus.
+1. Under **Logs**, select **Network Group Membership Change** and enter a retention period.
+
+ :::image type="content" source="media/how-to-configure-event-logging/storage-account-diagnostic-settings.png" alt-text="Screenshot of Diagnostic settings for storage account.":::
+
+1. Select **Save** and close the window.
+
+## View Azure Virtual Network Manager event logs
+
+In this task, you access the event logs for your Azure Virtual Network Manager instance.
+
+1. Under **Monitoring** in the left pane, select **Logs**.
+1. In the **Diagnostics** window, select **Run** or **Load to editor** under **Get recent Network Group Membership Changes**.
+
+ :::image type="content" source="media/how-to-configure-event-logging/run-query.png" alt-text="Screenshot of Run and Load to editor buttons in the diagnostics window.":::
+
+1. If you choose **Run**, the **Results** tab displays the event logs, and you can expand each log to view the details.
+
+ :::image type="content" source="media/how-to-configure-event-logging/workspace-log-details.png" alt-text="Screenshot of the event log details from the defined query.":::
+
+1. When you're done reviewing the logs, close the window and select **OK** to discard changes.
+
+ > [!NOTE]
+ > When you close the **Query editor** window, you'll be returned to the **Azure Home** page. If you need to return to the **Logs** page, browse to your virtual network manager instance, and select **Logs** under **Monitoring** in the left pane.
+
+1. If you choose **Load to editor**, the **Query editor** window displays the query. Choose **Run** to display the event logs and you can expand each log to view the details.
+
+ :::image type="content" source="media/how-to-configure-event-logging/workspace-log-details.png" alt-text="Screenshot of log details.":::
+1. Close the window and select **OK** to discard changes.
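You can also retrieve the same events programmatically. The sketch below queries the workspace with the Az.OperationalInsights module; the table name `AVNMNetworkGroupMembershipChange` is an assumption based on the log category and should be confirmed against the tables in your workspace.

```powershell
# Sketch: query recent network group membership changes from a Log Analytics workspace.
# <workspaceCustomerId> is the workspace (customer) GUID; the table name is an assumption.
$query = "AVNMNetworkGroupMembershipChange | sort by TimeGenerated desc | take 20"
$results = Invoke-AzOperationalInsightsQuery -WorkspaceId "<workspaceCustomerId>" -Query $query
$results.Results | Format-Table
```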
+
+## Next steps
+
+- Learn about [Security admin rules](concept-security-admins.md)
+- Learn how to [Use queries in Azure Monitor Log Analytics](../azure-monitor/logs/queries.md)
+- Learn how to block network traffic with a [SecurityAdmin configuration](how-to-block-network-traffic-portal.md).
virtual-network Routing Preference Azure Kubernetes Service Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/ip-services/routing-preference-azure-kubernetes-service-cli.md
Last updated 10/01/2021-+ ms.devlang: azurecli
virtual-network Nat Metrics https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/nat-gateway/nat-metrics.md
Reasons for why you may see failed connections:
- If you're seeing a pattern of failed connections for your NAT gateway resource, there could be multiple possible reasons. See the NAT gateway [troubleshooting guide](./troubleshoot-nat.md) to help you further diagnose.
-### Data path availability (Preview)
+### Data path availability
The data path availability metric measures the status of the NAT gateway resource over time. This metric indicates whether NAT gateway is available for directing outbound traffic to the internet. This metric reflects the health of the Azure infrastructure.
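For example, a hedged sketch of pulling this metric with Azure PowerShell is shown below. The `DatapathAvailability` metric name is an assumption and the resource ID is a placeholder; confirm both against the NAT gateway metrics reference.

```powershell
# Sketch: read the data path availability metric for a NAT gateway over the last hour.
# The metric name is an assumption and the resource ID is a placeholder.
$natGatewayId = "/subscriptions/<subscriptionId>/resourceGroups/<resourceGroup>/providers/Microsoft.Network/natGateways/<natGatewayName>"

Get-AzMetric -ResourceId $natGatewayId `
  -MetricName "DatapathAvailability" `
  -TimeGrain 00:05:00 `
  -StartTime (Get-Date).AddHours(-1) `
  -EndTime (Get-Date) `
  -AggregationType Average
```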
virtual-network Tutorial Connect Virtual Networks Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/tutorial-connect-virtual-networks-portal.md
virtual-network
Last updated 06/24/2022 -+ # Customer intent: I want to connect two virtual networks so that virtual machines in one virtual network can communicate with virtual machines in the other virtual network.
virtual-network Tutorial Create Route Table Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/tutorial-create-route-table-portal.md
virtual-network
Last updated 06/27/2022 -+ # Customer intent: I want to route traffic from one subnet, to a different subnet, through a network virtual appliance.
virtual-network Tutorial Filter Network Traffic https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/tutorial-filter-network-traffic.md
Last updated 06/28/2022 -+ # Customer intent: I want to filter network traffic to virtual machines that perform similar functions, such as web servers.
virtual-network Tutorial Restrict Network Access To Resources https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/tutorial-restrict-network-access-to-resources.md
virtual-network Last updated 06/29/2022-+ # Customer intent: I want only resources in a virtual network subnet to access an Azure PaaS resource, such as an Azure Storage account.