Updates from: 06/24/2021 03:07:40
Service Microsoft Docs article Related commit history on GitHub Change details
active-directory-b2c Application Types https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory-b2c/application-types.md
In this flow, the application executes [policies](user-flow-overview.md) and rec
Applications that contain long-running processes or that operate without the presence of a user also need a way to access secured resources such as web APIs. These applications can authenticate and get tokens by using the application's identity (rather than a user's delegated identity) and by using the OAuth 2.0 client credentials flow. The client credentials flow is not the same as the on-behalf-of flow, and the on-behalf-of flow should not be used for server-to-server authentication.
-Although the OAuth 2.0 client credentials grant flow is not currently directly supported by the Azure AD B2C authentication service, you can set up client credential flow using Azure AD and the Microsoft identity platform /token endpoint for an application in your Azure AD B2C tenant. An Azure AD B2C tenant shares some functionality with Azure AD enterprise tenants.
+Although the OAuth 2.0 client credentials grant flow is not currently directly supported by the Azure AD B2C authentication service, you can set up client credential flow using Azure AD and the Microsoft identity platform /token (https://login.microsoftonline.com/your-tenant-name.onmicrosoft.com/oauth2/v2.0/token) endpoint for an application in your Azure AD B2C tenant. An Azure AD B2C tenant shares some functionality with Azure AD enterprise tenants.
To set up client credential flow, see [Azure Active Directory v2.0 and the OAuth 2.0 client credentials flow](../active-directory/develop/v2-oauth2-client-creds-grant-flow.md). A successful authentication results in the receipt of a token formatted so that it can be used by Azure AD as described in [Azure AD token reference](../active-directory/develop/id-tokens.md).
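As an illustration only, here is a minimal MSAL.NET sketch of the client credentials flow against that endpoint; the tenant name, client ID, secret, and scope are placeholders, not values from this article. Under the hood, MSAL calls the /token endpoint shown above.

```csharp
using System;
using System.Threading.Tasks;
using Microsoft.Identity.Client;

class ClientCredentialsSample
{
    static async Task Main()
    {
        // Placeholder values; replace with your Azure AD B2C tenant and app registration details.
        IConfidentialClientApplication app = ConfidentialClientApplicationBuilder
            .Create("11111111-2222-3333-4444-555555555555") // application (client) ID
            .WithClientSecret("your-client-secret")
            .WithAuthority("https://login.microsoftonline.com/your-tenant-name.onmicrosoft.com")
            .Build();

        // ".default" requests all application permissions that have been granted to the app.
        string[] scopes = { "https://your-tenant-name.onmicrosoft.com/your-api/.default" };

        AuthenticationResult result = await app.AcquireTokenForClient(scopes).ExecuteAsync();
        Console.WriteLine(result.AccessToken);
    }
}
```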
active-directory Location Condition https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/conditional-access/location-condition.md
For the next 24 hours, if the user is still accessing the resource and granted t
A Conditional Access policy with GPS-based named locations in report-only mode prompts users to share their GPS location, even though they are not blocked from signing in. > [!IMPORTANT]
-> Users may receive prompts every hour letting them know that Azure AD is checking their location in the Authenticator app. The preview should only be used to protect very sensitive apps where this behavior is acceptable or where access needs to be restricted to a specific country.
+> Users may receive prompts every hour letting them know that Azure AD is checking their location in the Authenticator app. The preview should only be used to protect very sensitive apps where this behavior is acceptable or where access needs to be restricted to a specific country/region.
#### Include unknown countries/regions
active-directory App Objects And Service Principals https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/develop/app-objects-and-service-principals.md
Title: Apps & service principals in Azure AD | Azure
+ Title: Apps & service principals in Azure AD
description: Learn about the relationship between application and service principal objects in Azure Active Directory.
Previously updated : 04/16/2021 Last updated : 06/23/2021 -+ # Application and service principal objects in Azure Active Directory
-This article describes application registration, application objects, and service principals in Azure Active Directory: what they are, how they're used, and how they are related to each other. A multi-tenant example scenario is also presented to illustrate the relationship between an application's application object and corresponding service principal objects.
+This article describes application registration, application objects, and service principals in Azure Active Directory (Azure AD): what they are, how they're used, and how they are related to each other. A multi-tenant example scenario is also presented to illustrate the relationship between an application's application object and corresponding service principal objects.
## Application registration
-In order to delegate Identity and Access Management functions to Azure AD, an application must be registered with an Azure AD [tenant](developer-glossary.md#tenant). When you register your application with Azure AD, you are creating an identity configuration for your application that allows it to integrate with Azure AD. When you register an app in the [Azure portal][AZURE-Portal], you choose whether it's a single tenant (only accessible in your tenant) or multi-tenant (accessible in other tenants) and can optionally set a redirect URI (where the access token is sent to).
-For step-by-step instructions on registering an app, see the [app registration quickstart](quickstart-register-app.md).
+To delegate Identity and Access Management functions to Azure AD, an application must be registered with an Azure AD tenant. When you register your application with Azure AD, you're creating an identity configuration for your application that allows it to integrate with Azure AD. When you register an app in the Azure portal, you choose whether it's a single tenant (only accessible in your tenant) or multi-tenant (accessible in other tenants) and can optionally set a redirect URI (where the access token is sent to). For step-by-step instructions on registering an app, see the [app registration quickstart](quickstart-register-app.md).
-When you've completed the app registration, you have a globally unique instance of the app (the [application object](#application-object)) which lives within your home tenant or directory. You also have a globally unique ID for your app (the app or client ID). In the portal, you can then add secrets or certificates and scopes to make your app work, customize the branding of your app in the sign-in dialog, and more.
+When you've completed the app registration, you have a globally unique instance of the app (the application object) which lives within your home tenant or directory. You also have a globally unique ID for your app (the app or client ID). In the portal, you can then add secrets or certificates and scopes to make your app work, customize the branding of your app in the sign-in dialog, and more.
-If you register an application in the portal, an application object as well as a service principal object are automatically created in your home tenant. If you register/create an application using the Microsoft Graph APIs, creating the service principal object is a separate step.
+If you register an application in the portal, an application object as well as a service principal object are automatically created in your home tenant. If you register/create an application using the Microsoft Graph APIs, creating the service principal object is a separate step.
## Application object+ An Azure AD application is defined by its one and only application object, which resides in the Azure AD tenant where the application was registered (known as the application's "home" tenant). An application object is used as a template or blueprint to create one or more service principal objects. A service principal is created in every tenant where the application is used. Similar to a class in object-oriented programming, the application object has some static properties that are applied to all the created service principals (or application instances). The application object describes three aspects of an application: how the service can issue tokens in order to access the application, resources that the application might need to access, and the actions that the application can take.
-The **App registrations** blade in the [Azure portal][AZURE-Portal] is used to list and manage the application objects in your home tenant.
+You can use the **App registrations** blade in the [Azure portal][AZURE-Portal] to list and manage the application objects in your home tenant.
![App registrations blade](./media/app-objects-and-service-principals/app-registrations-blade.png) The Microsoft Graph [Application entity][MS-Graph-App-Entity] defines the schema for an application object's properties. ## Service principal object+ To access resources that are secured by an Azure AD tenant, the entity that requires access must be represented by a security principal. This requirement is true for both users (user principal) and applications (service principal). The security principal defines the access policy and permissions for the user/application in the Azure AD tenant. This enables core features such as authentication of the user/application during sign-in, and authorization during resource access.
-There are three types of service principal: application, managed identity, and legacy.
+There are three types of service principal:
-The first type of service principal is the local representation, or application instance, of a global application object in a single tenant or directory. In this case, a service principal is a concrete instance created from the application object and inherits certain properties from that application object. A service principal is created in each tenant where the application is used and references the globally unique app object. The service principal object defines what the app can actually do in the specific tenant, who can access the app, and what resources the app can access.
+- **Application** - This type of service principal is the local representation, or application instance, of a global application object in a single tenant or directory. In this case, a service principal is a concrete instance created from the application object and inherits certain properties from that application object. A service principal is created in each tenant where the application is used and references the globally unique app object. The service principal object defines what the app can actually do in the specific tenant, who can access the app, and what resources the app can access.
-When an application is given permission to access resources in a tenant (upon registration or [consent](developer-glossary.md#consent)), a service principal object is created. You can also create service principal objects in a tenant using [Azure PowerShell](howto-authenticate-service-principal-powershell.md), [Azure CLI](/cli/azure/create-an-azure-service-principal-azure-cli), [Microsoft Graph](/graph/api/serviceprincipal-post-serviceprincipals?tabs=http), the [Azure portal][AZURE-Portal], and other tools. When using the portal, a service principal is created automatically when you register an application.
+ When an application is given permission to access resources in a tenant (upon registration or consent), a service principal object is created. When you register an application using the Azure portal, a service principal is created automatically. You can also create service principal objects in a tenant using Azure PowerShell, Azure CLI, Microsoft Graph, and other tools.
-The second type of service principal is used to represent a [managed identity](../managed-identities-azure-resources/overview.md). Managed identities eliminate the need for developers to manage credentials. Managed identities provide an identity for applications to use when connecting to resources that support Azure AD authentication. When a managed identity is enabled, a service principal representing that managed identity is created in your tenant. Service principals representing managed identities can be granted access and permissions, but cannot be updated or modified directly.
+- **Managed identity** - This type of service principal is used to represent a [managed identity](../managed-identities-azure-resources/overview.md). Managed identities eliminate the need for developers to manage credentials. Managed identities provide an identity for applications to use when connecting to resources that support Azure AD authentication. When a managed identity is enabled, a service principal representing that managed identity is created in your tenant. Service principals representing managed identities can be granted access and permissions, but cannot be updated or modified directly.
-The third type of service principal represents a legacy app (an app created before app registrations were introduced or created through legacy experiences). A legacy service principal can have credentials, service principal names, reply URLs, and other properties which are editable by an authorized user, but does not have an associated app registration. The service principal can only be used in the tenant where it was created.
+- **Legacy** - This type of service principal represents a legacy app, which is an app created before app registrations were introduced or an app created through legacy experiences. A legacy service principal can have credentials, service principal names, reply URLs, and other properties that an authorized user can edit, but does not have an associated app registration. The service principal can only be used in the tenant where it was created.
The Microsoft Graph [ServicePrincipal entity][MS-Graph-Sp-Entity] defines the schema for a service principal object's properties.
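As a rough illustration of the Microsoft Graph option for creating a service principal (mentioned in the list above), the following sketch posts to the `servicePrincipals` endpoint directly; the access token and application (client) ID are placeholders, and the token is assumed to carry a permission such as `Application.ReadWrite.All`.

```csharp
using System;
using System.Net.Http;
using System.Net.Http.Headers;
using System.Text;
using System.Threading.Tasks;

class CreateServicePrincipalSample
{
    static async Task Main()
    {
        string accessToken = "<access token with Application.ReadWrite.All>"; // placeholder
        string appId = "11111111-2222-3333-4444-555555555555";                // placeholder application (client) ID

        using var client = new HttpClient();
        client.DefaultRequestHeaders.Authorization = new AuthenticationHeaderValue("Bearer", accessToken);

        // Creating a service principal only requires the appId of the existing application object.
        var body = new StringContent($"{{\"appId\":\"{appId}\"}}", Encoding.UTF8, "application/json");
        HttpResponseMessage response = await client.PostAsync("https://graph.microsoft.com/v1.0/servicePrincipals", body);

        Console.WriteLine(await response.Content.ReadAsStringAsync());
    }
}
```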
-The **Enterprise applications** blade in the portal is used to list and manage the service principals in a tenant. You can see the service principal's permissions, user consented permissions, which users have done that consent, sign in information, and more.
+You can use the **Enterprise applications** blade in the Azure portal to list and manage the service principals in a tenant. You can see the service principal's permissions, the permissions users have consented to, which users have granted that consent, sign-in information, and more.
![Enterprise apps blade](./media/app-objects-and-service-principals/enterprise-apps-blade.png) ## Relationship between application objects and service principals
-The application object is the *global* representation of your application for use across all tenants, and the service principal is the *local* representation for use in a specific tenant.
+The application object is the *global* representation of your application for use across all tenants, and the service principal is the *local* representation for use in a specific tenant. The application object serves as the template from which common and default properties are *derived* for use in creating corresponding service principal objects.
-The application object serves as the template from which common and default properties are *derived* for use in creating corresponding service principal objects. An application object therefore has a 1:1 relationship with the software application, and a 1:many relationship with its corresponding service principal object(s).
+An application object has:
+
+- A 1:1 relationship with the software application, and
+- A 1:many relationship with its corresponding service principal object(s).
A service principal must be created in each tenant where the application is used, enabling it to establish an identity for sign-in and/or access to resources being secured by the tenant. A single-tenant application has only one service principal (in its home tenant), created and consented for use during application registration. A multi-tenant application also has a service principal created in each tenant where a user from that tenant has consented to its use. ### Consequences of modifying and deleting applications
-Any changes that you make to your application object are also reflected in its service principal object in the application's home tenant only (the tenant where it was registered). This means that deleting an application object will also delete its home tenant service principal object. However, restoring that application object will not restore its corresponding service principal. For multi-tenant applications, changes to the application object are not reflected in any consumer tenants' service principal objects, until the access is removed through the [Application Access Panel](https://myapps.microsoft.com) and granted again.
+
+Any changes that you make to your application object are also reflected in its service principal object in the application's home tenant only (the tenant where it was registered). This means that deleting an application object will also delete its home tenant service principal object. However, restoring that application object will not restore its corresponding service principal. For multi-tenant applications, changes to the application object are not reflected in any consumer tenants' service principal objects until the access is removed through the [Application Access Panel](https://myapps.microsoft.com) and granted again.
## Example
-The following diagram illustrates the relationship between an application's application object and corresponding service principal objects, in the context of a sample multi-tenant application called **HR app**. There are three Azure AD tenants in this example scenario:
+The following diagram illustrates the relationship between an application's application object and corresponding service principal objects in the context of a sample multi-tenant application called **HR app**. There are three Azure AD tenants in this example scenario:
- **Adatum** - The tenant used by the company that developed the **HR app** - **Contoso** - The tenant used by the Contoso organization, which is a consumer of the **HR app**
In this example scenario:
## Next steps -- You can use the [Microsoft Graph Explorer](https://developer.microsoft.com/graph/graph-explorer) to query both the application and service principal objects.-- You can access an application's application object using the Microsoft Graph API, the [Azure portal's][AZURE-Portal] application manifest editor, or [Azure AD PowerShell cmdlets](/powershell/azure/), as represented by its OData [Application entity][MS-Graph-App-Entity].-- You can access an application's service principal object through the Microsoft Graph API or [Azure AD PowerShell cmdlets](/powershell/azure/), as represented by its OData [ServicePrincipal entity][MS-Graph-Sp-Entity].
+Learn how to create a service principal:
-<!--Image references-->
+- [Using the Azure portal](howto-create-service-principal-portal.md)
+- [Using Azure PowerShell](howto-authenticate-service-principal-powershell.md)
+- [Using Azure CLI](/cli/azure/create-an-azure-service-principal-azure-cli)
+- [Using Microsoft Graph](/graph/api/serviceprincipal-post-serviceprincipals?tabs=http) and then use [Microsoft Graph Explorer](https://developer.microsoft.com/graph/graph-explorer) to query both the application and service principal objects.
<!--Reference style links --> [MS-Graph-App-Entity]: /graph/api/resources/application
active-directory Reply Url https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/develop/reply-url.md
Title: Redirect URI (reply URL) restrictions | Azure
+ Title: Redirect URI (reply URL) restrictions | Azure AD
description: A description of the restrictions and limitations on redirect URI (reply URL) format enforced by the Microsoft identity platform. Previously updated : 11/23/2020 Last updated : 06/23/2021 -+ + # Redirect URI (reply URL) restrictions and limitations A redirect URI, or reply URL, is the location where the authorization server sends the user once the app has been successfully authorized and granted an authorization code or access token. The authorization server sends the code or token to the redirect URI, so it's important you register the correct location as part of the app registration process.
You cannot, however, use the **Redirect URIs** text box in the Azure portal to a
:::image type="content" source="media/reply-url/portal-01-no-http-loopback-redirect-uri.png" alt-text="Error dialog in Azure portal showing disallowed http-based loopback redirect URI":::
-To add a redirect URI that uses the `http` scheme with the `127.0.0.1` loopback address, you must currently modify the [replyUrlsWithType](reference-app-manifest.md#replyurlswithtype-attribute) attribute in the [application manifest](reference-app-manifest.md).
+To add a redirect URI that uses the `http` scheme with the `127.0.0.1` loopback address, you must currently modify the [replyUrlsWithType attribute in the application manifest](reference-app-manifest.md#replyurlswithtype-attribute).
## Restrictions on wildcards in redirect URIs
Wildcard URIs like `https://*.contoso.com` may seem convenient, but should be av
Wildcard URIs are currently unsupported in app registrations configured to sign in personal Microsoft accounts and work or school accounts. Wildcard URIs are allowed, however, for apps that are configured to sign in only work or school accounts in an organization's Azure AD tenant.
-To add redirect URIs with wildcards to app registrations that sign in work or school accounts, use the application manifest editor in [App registrations](https://go.microsoft.com/fwlink/?linkid=2083908) in the Azure portal. Though it's possible to set a redirect URI with a wildcard by using the manifest editor, we *strongly* recommend you adhere to [section 3.1.2 of RFC 6749](https://tools.ietf.org/html/rfc6749#section-3.1.2) and use only absolute URIs.
+To add redirect URIs with wildcards to app registrations that sign in work or school accounts, use the application manifest editor in **App registrations** in the Azure portal. Though it's possible to set a redirect URI with a wildcard by using the manifest editor, we *strongly* recommend you adhere to section 3.1.2 of RFC 6749 and use only absolute URIs.
-If your scenario requires more redirect URIs than the maximum limit allowed, consider the following [state parameter approach](#use-a-state-parameter) instead of adding a wildcard redirect URI.
+If your scenario requires more redirect URIs than the maximum limit allowed, consider the following state parameter approach instead of adding a wildcard redirect URI.
#### Use a state parameter
active-directory Scenario Web App Call Api App Configuration https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/develop/scenario-web-app-call-api-app-configuration.md
The sample currently lets MSAL.Python produce the authorization-code URL and han
# [ASP.NET Core](#tab/aspnetcore)
-Microsoft.Identity.Web simplifies your code by setting the correct OpenID Connect settings, subscribing to the code received event, and redeeming the code. No additional code is required to redeem the authorization code. See [Microsoft.Identity.Web source code](https://github.com/AzureAD/microsoft-identity-web/blob/c29f1a7950b940208440bebf0bcb524a7d6bee22/src/Microsoft.Identity.Web/WebAppExtensions/WebAppCallsWebApiAuthenticationBuilderExtensions.cs#L140) for details on how this works.
+Microsoft.Identity.Web simplifies your code by setting the correct OpenID Connect settings, subscribing to the code received event, and redeeming the code. No extra code is required to redeem the authorization code. See [Microsoft.Identity.Web source code](https://github.com/AzureAD/microsoft-identity-web/blob/c29f1a7950b940208440bebf0bcb524a7d6bee22/src/Microsoft.Identity.Web/WebAppExtensions/WebAppCallsWebApiAuthenticationBuilderExtensions.cs#L140) for details on how this works.
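As a minimal sketch (not taken from the linked source), the ASP.NET Core wiring with Microsoft.Identity.Web typically looks like the following; the configuration section name and scope below are illustrative assumptions.

```csharp
using Microsoft.Extensions.Configuration;
using Microsoft.Extensions.DependencyInjection;
using Microsoft.Identity.Web;

public class Startup
{
    public Startup(IConfiguration configuration) => Configuration = configuration;

    public IConfiguration Configuration { get; }

    public void ConfigureServices(IServiceCollection services)
    {
        // Registers OpenID Connect sign-in, subscribes to the code-received event,
        // and redeems the authorization code so the app can call a downstream web API.
        services.AddMicrosoftIdentityWebAppAuthentication(Configuration)          // reads the "AzureAd" section by default
                .EnableTokenAcquisitionToCallDownstreamApi(new[] { "user.read" }) // illustrative scope
                .AddInMemoryTokenCaches();
    }
}
```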
# [ASP.NET](#tab/aspnet)
services.AddDistributedSqlServerCache(options =>
}); ```
-For details about the token-cache providers, see also Microsoft.Identity.Web's [Token cache serialization](https://aka.ms/ms-id-web/token-cache-serialization) article, as well as the [ASP.NET Core Web app tutorials | Token caches](https://github.com/Azure-Samples/active-directory-aspnetcore-webapp-openidconnect-v2/tree/master/2-WebApp-graph-user/2-2-TokenCache) phase of the web apps tutorial.
+For details about the token-cache providers, see also Microsoft.Identity.Web's [Token cache serialization](https://aka.ms/ms-id-web/token-cache-serialization) article, and the [ASP.NET Core Web app tutorials | Token caches](https://github.com/Azure-Samples/active-directory-aspnetcore-webapp-openidconnect-v2/tree/master/2-WebApp-graph-user/2-2-TokenCache) phase of the web apps tutorial.
# [ASP.NET](#tab/aspnet) The token-cache implementation for web apps or web APIs is different from the implementation for desktop applications, which is often [file based](scenario-desktop-acquire-token.md#file-based-token-cache).
-The web-app implementation can use the ASP.NET session or the server memory. For example, see how the cache implementation is hooked after the creation of the MSAL.NET application in [MsalAppBuilder.cs#L39-L51](https://github.com/Azure-Samples/ms-identity-aspnet-webapp-openidconnect/blob/a2da310539aa613b77da1f9e1c17585311ab22b7/WebApp/Utils/MsalAppBuilder.cs#L39-L51):
+The web-app implementation can use the ASP.NET session or the server memory. For example, see how the cache implementation is hooked after the creation of the MSAL.NET application in [MsalAppBuilder.cs#L39-L51](https://github.com/Azure-Samples/ms-identity-aspnet-webapp-openidconnect/blob/79e3e1f084cd78f9170a8ca4077869f217735a1a/WebApp/Utils/MsalAppBuilder.cs#L57-L58):
++
+First, to use these implementations:
+- Add the Microsoft.Identity.Web NuGet package. These token-cache serializers aren't included in MSAL.NET directly, to avoid unwanted dependencies. In addition to higher-level APIs for ASP.NET Core, Microsoft.Identity.Web provides helper classes for MSAL.NET.
+- In your code, use the Microsoft.Identity.Web namespace:
+
+ ```csharp
 + using Microsoft.Identity.Web;
+ ```
+- Once you have built your confidential client application, add the token cache serialization of your choice.
```csharp
public static class MsalAppBuilder
{
- // Omitted code
- public static IConfidentialClientApplication BuildConfidentialClientApplication(ClaimsPrincipal currentUser)
+ private static IConfidentialClientApplication clientapp;
+
+ public static IConfidentialClientApplication BuildConfidentialClientApplication()
+ {
+ if (clientapp == null)
{
- IConfidentialClientApplication clientapp = ConfidentialClientApplicationBuilder.Create(AuthenticationConfig.ClientId)
+ clientapp = ConfidentialClientApplicationBuilder.Create(AuthenticationConfig.ClientId)
                .WithClientSecret(AuthenticationConfig.ClientSecret)
                .WithRedirectUri(AuthenticationConfig.RedirectUri)
                .WithAuthority(new Uri(AuthenticationConfig.Authority))
                .Build();
- // After the ConfidentialClientApplication is created, we overwrite its default UserTokenCache with our implementation.
- MSALPerUserMemoryTokenCache userTokenCache = new MSALPerUserMemoryTokenCache(clientapp.UserTokenCache, currentUser ?? ClaimsPrincipal.Current);
-
- return clientapp;
+ // After the ConfidentialClientApplication is created, we overwrite its default UserTokenCache serialization with our implementation
+ clientapp.AddInMemoryTokenCache();
+ }
+ return clientapp;
}
```
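For context, a hypothetical caller (not part of the sample) might consume this builder as follows; the scope and account identifier are placeholders.

```csharp
// Hypothetical usage of the builder above (not part of the sample): acquire a token
// silently for a signed-in user, relying on the serialized token cache.
private static async Task<AuthenticationResult> GetTokenForUserAsync(string accountId)
{
    IConfidentialClientApplication app = MsalAppBuilder.BuildConfidentialClientApplication();

    string[] scopes = { "user.read" };                        // illustrative scope
    IAccount account = await app.GetAccountAsync(accountId);  // accountId: the user's home account identifier

    return await app.AcquireTokenSilent(scopes, account).ExecuteAsync();
}
```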
+Instead of `clientapp.AddInMemoryTokenCache()`, you can also use more advanced cache serialization implementations like Redis, SQL Server, Azure Cosmos DB, or distributed memory. Here's an example for Redis:
+
+```csharp
+ clientapp.AddDistributedTokenCache(services =>
+ {
+ services.AddStackExchangeRedisCache(options =>
+ {
+ options.Configuration = "localhost";
+ options.InstanceName = "SampleInstance";
+ });
+ });
+```
+
+For details, see [Token cache serialization for MSAL.NET](https://aka.ms/ms-id-web/token-cache-serialization-msal).
+ # [Java](#tab/java) MSAL Java provides methods to serialize and deserialize the token cache. The Java sample handles the serialization from the session, as shown in the `getAuthResultBySilentFlow` method in [AuthHelper.java#L99-L122](https://github.com/Azure-Samples/ms-identity-java-webapp/blob/d55ee4ac0ce2c43378f2c99fd6e6856d41bdf144/src/main/java/com/microsoft/azure/msalwebsample/AuthHelper.java#L99-L122):
active-directory Howto Vm Sign In Azure Ad Linux https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/devices/howto-vm-sign-in-azure-ad-linux.md
az login
This command will launch a browser window and a user can sign in using their Azure AD account.
-The following example automatically resolves the appropriate IP address for the VM.
+The following [az ssh](/cli/azure/ssh?view=azure-cli-latest) example automatically resolves the appropriate IP address for the VM.
```azurecli az ssh vm -n myVM -g AzureADLinuxVMPreview
active-directory Troubleshoot https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/external-identities/troubleshoot.md
If you accidentally deleted the `aad-extensions-app`, you have 30 days to recove
You should now see the restored app in the Azure portal.
+## A guest user was invited successfully but the email attribute is not populated
+
+Let's say you inadvertently invite a guest user with an email address that matches a contact object already in your directory. The guest user object is created, but the email address is added to the `otherMail` property instead of to the `mail` or `proxyAddresses` properties. To avoid this issue, you can search for conflicting contact objects in your Azure AD directory by using these PowerShell steps:
+
+1. Open the Azure AD PowerShell module and run `Connect-AzureAD`.
+1. Sign in as a global administrator for the Azure AD tenant that you want to check for duplicate contact objects in.
+1. Run the PowerShell command `Get-AzureADContact -All $true | ? {$_.ProxyAddresses -match 'user@domain.com'}`.
+1. Run the PowerShell command `Get-AzureADContact -All $true | ? {$_.Mail -match 'user@domain.com'}`.
+ ## Next steps [Get support for B2B collaboration](../fundamentals/active-directory-troubleshooting-support-howto.md)
active-directory Permissions Reference https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/roles/permissions-reference.md
Previously updated : 06/03/2021 Last updated : 06/18/2021
This article lists the Azure AD built-in roles you can assign to allow managemen
> | [Teams Devices Administrator](#teams-devices-administrator) | Can perform management related tasks on Teams certified devices. | 3d762c5a-1b6c-493f-843e-55a3b42923d4 | > | [Usage Summary Reports Reader](#usage-summary-reports-reader) | Can see only tenant level aggregates in Microsoft 365 Usage Analytics and Productivity Score. | 75934031-6c7e-415a-99d7-48dbd49e875e | > | [User Administrator](#user-administrator) | Can manage all aspects of users and groups, including resetting passwords for limited admins. | fe930be7-5e62-47db-91af-98c3a49a38b1 |
+> | [Windows Update Deployment Administrator](#windows-update-deployment-administrator) | Create and manage all aspects of Windows Update deployments through the Windows Update for Business deployment service. | 32696413-001a-46ae-978c-ce0f6b3620d2 |
## Application Administrator
Users with this role can create users, and manage all aspects of users with some
> | microsoft.office365.supportTickets/allEntities/allTasks | Create and manage Microsoft 365 service requests | > | microsoft.office365.webPortal/allEntities/standard/read | Read basic properties on all resources in the Microsoft 365 admin center |
+## Windows Update Deployment Administrator
+
+Users in this role can create and manage all aspects of Windows Update deployments through the Windows Update for Business deployment service. The deployment service enables users to define settings for when and how updates are deployed, and specify which updates are offered to groups of devices in their tenant. It also allows users to monitor the update progress.
+
+> [!div class="mx-tableFixed"]
+> | Actions | Description |
+> | | |
+> | microsoft.windows.updatesDeployments/allEntities/allProperties/allTasks | Read and configure all aspects of Windows Update Service |
+ ## How to understand role permissions The schema for permissions loosely follows the REST format of Microsoft Graph:
active-directory Acunetix 360 Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/saas-apps/acunetix-360-tutorial.md
Previously updated : 03/04/2021 Last updated : 06/21/2021
To get started, you need the following items:
* An Azure AD subscription. If you don't have a subscription, you can get a [free account](https://azure.microsoft.com/free/). * Acunetix 360 single sign-on (SSO) enabled subscription.
+> [!NOTE]
+> This integration is also available to use from Azure AD US Government Cloud environment. You can find this application in the Azure AD US Government Cloud Application Gallery and configure it in the same way as you do from public cloud.
+ ## Scenario description In this tutorial, you configure and test Azure AD SSO in a test environment.
To configure the integration of Acunetix 360 into Azure AD, you need to add Acun
1. In the **Add from the gallery** section, type **Acunetix 360** in the search box. 1. Select **Acunetix 360** from results panel and then add the app. Wait a few seconds while the app is added to your tenant. - ## Configure and test Azure AD SSO for Acunetix 360 Configure and test Azure AD SSO with Acunetix 360 using a test user called **B.Simon**. For SSO to work, you need to establish a link relationship between an Azure AD user and the related user in Acunetix 360.
Follow these steps to enable Azure AD SSO in the Azure portal.
![Edit Basic SAML Configuration](common/edit-urls.png)
-1. On the **Basic SAML Configuration** section, if you wish to configure the application in **IDP** initiated mode, enter the values for the following fields:
+1. On the **Basic SAML Configuration** section, if you wish to configure the application in **IDP** initiated mode, perform the following steps:
In the **Reply URL** text box, type a URL using the following pattern: `https://online.acunetix360.com/account/assertionconsumerservice/?spId=<SPID>`
In this section, you test your Azure AD single sign-on configuration with follow
#### IDP initiated:
-* Click on **Test this application** in Azure portal and you should be automatically signed in to the Acunetix 360 for which you set up the SSO
+* Click on **Test this application** in Azure portal and you should be automatically signed in to the Acunetix 360 for which you set up the SSO.
You can also use Microsoft My Apps to test the application in any mode. When you click the Acunetix 360 tile in the My Apps, if configured in SP mode you would be redirected to the application sign on page for initiating the login flow and if configured in IDP mode, you should be automatically signed in to the Acunetix 360 for which you set up the SSO. For more information about the My Apps, see [Introduction to the My Apps](../user-help/my-apps-portal-end-user-access.md). - ## Next steps Once you configure Acunetix 360 you can enforce session control, which protects exfiltration and infiltration of your organization's sensitive data in real time. Session control extends from Conditional Access. [Learn how to enforce session control with Microsoft Cloud App Security](/cloud-app-security/proxy-deployment-any-app).
active-directory Ibmid Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/saas-apps/ibmid-tutorial.md
Previously updated : 06/11/2021 Last updated : 06/22/2021
In this tutorial, you configure and test Azure AD SSO in a test environment.
* IBMid supports **SP and IDP** initiated SSO. * IBMid supports **Just In Time** user provisioning.
+> [!NOTE]
+> Identifier of this application is a fixed string value so only one instance can be configured in one tenant.
+ ## Add IBMid from the gallery To configure the integration of IBMid into Azure AD, you need to add IBMid from the gallery to your list of managed SaaS apps.
Follow these steps to enable Azure AD SSO in the Azure portal.
| Identifier | | - |
+ | Production : |
| `https://ibmlogin.ice.ibmcloud.com/saml/sps/saml20sp/saml20` |
+ | Pre-Production : |
| `https://prepiam.ice.ibmcloud.com/saml/sps/saml20sp/saml20` | |
Follow these steps to enable Azure AD SSO in the Azure portal.
| Reply URL | | - |
+ | Production : |
| `https://login.ibm.com/saml/sps/saml20sp/saml20/login` |
+ | Pre-Production : |
| `https://prepiam.ice.ibmcloud.com/saml/sps/saml20sp/saml20/login` | |
active-directory Lensesio Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/saas-apps/lensesio-tutorial.md
Previously updated : 07/02/2020 Last updated : 06/21/2021
In this tutorial, you'll learn how to integrate the [Lenses.io](https://lenses.i
* Enable your users to be automatically signed-in to Lenses with their Azure AD accounts. * Manage your accounts in one central location: the Azure portal.
-To learn more about software as a service (SaaS) app integration with Azure AD, see [What is application access and single sign-on with Azure AD](../manage-apps/what-is-single-sign-on.md).
- ## Prerequisites To get started, you need the following items:
To get started, you need the following items:
* An instance of a Lenses portal. You can choose from a number of [deployment options](https://lenses.io/product/deployment/). * A Lenses.io [license](https://lenses.io/product/pricing/) that supports single sign-on (SSO).
+> [!NOTE]
+> This integration is also available to use from Azure AD US Government Cloud environment. You can find this application in the Azure AD US Government Cloud Application Gallery and configure it in the same way as you do from public cloud.
+ ## Scenario description In this tutorial, you'll configure and test Azure AD SSO in a test environment. * Lenses.io supports service provider (SP) initiated SSO.
-* You can enforce session control after you configure Lenses.io. Session control protects exfiltration and infiltration of your organizationΓÇÖs sensitive data in real time. Session control extends from Conditional Access. [Learn how to enforce session control with Microsoft Cloud App Security](/cloud-app-security/proxy-deployment-any-app).
- ## Add Lenses.io from the gallery To configure the integration of Lenses.io into Azure AD, add Lenses.io to your list of managed SaaS apps:
-1. Sign in to the [Azure portal](https://portal.azure.com) by using a work or school account, or a personal Microsoft account.
+1. Sign in to the Azure portal by using a work or school account, or a personal Microsoft account.
1. On the left pane, select the **Azure Active Directory** service. 1. Go to **Enterprise Applications**, and then select **All Applications**. 1. Select **New application**.
To configure the integration of Lenses.io into Azure AD, add Lenses.io to your l
You'll create a test user called *B.Simon* to configure and test Azure AD SSO with your Lenses.io portal. For SSO to work, you need to establish a link relationship between an Azure AD user and the related user in Lenses.io.
-Complete the following steps:
+Perform the following steps:
1. [Configure Azure AD SSO](#configure-azure-ad-sso) to enable your users to use this feature. 1. [Create an Azure AD test user and group](#create-an-azure-ad-test-user-and-group) to test Azure AD SSO with B.Simon.
Complete the following steps:
Follow these steps to enable Azure AD SSO in the Azure portal:
-1. In the [Azure portal](https://portal.azure.com/), on the **Lenses.io** application integration page, find the **Manage** section, and then select **single sign-on**.
+1. In the Azure portal, on the **Lenses.io** application integration page, find the **Manage** section, and then select **single sign-on**.
1. On the **Select a single sign-on method** page, select **SAML**.
-1. On the **Set up single sign-on with SAML** page, select the edit/pen icon for **Basic SAML Configuration** to edit the settings.
+1. On the **Set up single sign-on with SAML** page, select the pencil icon for **Basic SAML Configuration** to edit the settings.
![Screenshot that shows the icon for editing basic SAML configuration.](common/edit-urls.png)
-1. In the **Basic SAML Configuration** section, enter values in the following text-entry boxes:
+1. In the **Basic SAML Configuration** section, perform the following steps:
- a. **Sign on URL**: Enter a URL that has the following pattern: `https://<CUSTOMER_LENSES_BASE_URL>`. An example is `https://lenses.my.company.com`.
+ a. **Identifier (Entity ID)**: Enter a URL that has the following pattern: `https://<CUSTOMER_LENSES_BASE_URL>`. An example is `https://lenses.my.company.com`.
- b. **Identifier (Entity ID)**: Enter a URL that has the following pattern: `https://<CUSTOMER_LENSES_BASE_URL>`. An example is `https://lenses.my.company.com`.
+ b. **Reply URL**: Enter a URL that has the following pattern: `https://<CUSTOMER_LENSES_BASE_URL>/api/v2/auth/saml/callback?client_name=SAML2Client`. An example is `https://lenses.my.company.com/api/v2/auth/saml/callback?client_name=SAML2Client`.
- c. **Reply URL**: Enter a URL that has the following pattern: `https://<CUSTOMER_LENSES_BASE_URL>/api/v2/auth/saml/callback?client_name=SAML2Client`. An example is `https://lenses.my.company.com/api/v2/auth/saml/callback?client_name=SAML2Client`.
+ c. **Sign on URL**: Enter a URL that has the following pattern: `https://<CUSTOMER_LENSES_BASE_URL>`. An example is `https://lenses.my.company.com`.
> [!NOTE]
- > These values are not real. Update them with the actual sign-on URL, reply URL, and identifier of the base URL of your Lenses portal instance. See the [Lenses.io SSO documentation](https://docs.lenses.io/install_setup/configuration/security.html#single-sign-on-sso-saml-2-0) for more information.
 + > These values are not real. Update them with the actual Identifier, Reply URL, and Sign on URL of the base URL of your Lenses portal instance. See the [Lenses.io SSO documentation](https://docs.lenses.io/install_setup/configuration/security.html#single-sign-on-sso-saml-2-0) for more information.
1. On the **Set up single sign-on with SAML** page, go to the **SAML Signing Certificate** section. Find **Federation Metadata XML**, and then select **Download** to download and save the certificate on your computer.
In this section, you'll enable B.Simon to use Azure single sign-on by granting a
1. In the Azure portal, select **Enterprise Applications**, and then select **All applications**. 1. On the applications list, select **Lenses.io**. 1. On the app overview page, in the **Manage** section, select **Users and groups**.-
- ![Screenshot that shows the "Users and groups" link.](common/users-groups-blade.png)
- 1. Select **Add user**.-
- ![Screenshot that shows the Add User link.](common/add-assign-user.png)
- 1. In the **Add Assignment** dialog box, select **Users and groups**. 1. In the **Users and groups** dialog box, select **B.Simon** from the Users list. Then click the **Select** button at the bottom of the screen. 1. If you're expecting any role value in the SAML assertion, in the **Select Role** dialog box, choose the appropriate role for the user from the list. Then click the **Select** button at the bottom of the screen.
For more information, see [Azure - Lenses group mapping](https://docs.lenses.io/
## Test SSO
-In this section, test your Azure AD SSO configuration by using the Access Panel.
-
-When you select the Lenses.io tile on the Access Panel, you should be automatically signed in to your Lenses.io portal. For more information, see [Introduction to the Access Panel](../user-help/my-apps-portal-end-user-access.md).
-
-## Additional resources
--- [Set up SSO in your Lenses.io instance](https://docs.lenses.io/install_setup/configuration/security.html#single-sign-on-sso-saml-2-0)--- [List of tutorials on how to integrate SaaS apps with Azure AD](./tutorial-list.md)
+In this section, you test your Azure AD single sign-on configuration with following options.
-- [What is application access and SSO with Azure AD?](../manage-apps/what-is-single-sign-on.md)
+* Click on **Test this application** in Azure portal. This will redirect to Lenses.io Sign-on URL where you can initiate the login flow.
-- [What is conditional access in Azure AD?](../conditional-access/overview.md)
+* Go to Lenses.io Sign-on URL directly and initiate the login flow from there.
-- [Try Lenses.io with Azure AD](https://aad.portal.azure.com/)
+* You can use Microsoft My Apps. When you click the Lenses.io tile in the My Apps, this will redirect to Lenses.io Sign-on URL. For more information about the My Apps, see [Introduction to the My Apps](../user-help/my-apps-portal-end-user-access.md).
-- [What is session control in Microsoft Cloud App Security?](/cloud-app-security/proxy-intro-aad)
+## Next steps
-- [How to protect Lenses.io with advanced visibility and controls](/cloud-app-security/proxy-intro-aad)
+Once you configure Lenses.io you can enforce session control, which protects exfiltration and infiltration of your organization's sensitive data in real time. Session control extends from Conditional Access. [Learn how to enforce session control with Microsoft Cloud App Security](/cloud-app-security/proxy-deployment-aad).
active-directory Lexonis Talentscape Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/saas-apps/lexonis-talentscape-tutorial.md
Previously updated : 03/10/2021 Last updated : 06/21/2021
To get started, you need the following items:
* An Azure AD subscription. If you don't have a subscription, you can get a [free account](https://azure.microsoft.com/free/). * Lexonis TalentScape single sign-on (SSO) enabled subscription.
+> [!NOTE]
+> This integration is also available to use from Azure AD US Government Cloud environment. You can find this application in the Azure AD US Government Cloud Application Gallery and configure it in the same way as you do from public cloud.
+ ## Scenario description In this tutorial, you configure and test Azure AD SSO in a test environment.
-* Lexonis TalentScape supports **SP and IDP** initiated SSO
-* Lexonis TalentScape supports **Just In Time** user provisioning
+* Lexonis TalentScape supports **SP and IDP** initiated SSO.
+* Lexonis TalentScape supports **Just In Time** user provisioning.
-## Adding Lexonis TalentScape from the gallery
+## Add Lexonis TalentScape from the gallery
To configure the integration of Lexonis TalentScape into Azure AD, you need to add Lexonis TalentScape from the gallery to your list of managed SaaS apps.
To configure the integration of Lexonis TalentScape into Azure AD, you need to a
1. In the **Add from the gallery** section, type **Lexonis TalentScape** in the search box. 1. Select **Lexonis TalentScape** from results panel and then add the app. Wait a few seconds while the app is added to your tenant. - ## Configure and test Azure AD SSO for Lexonis TalentScape Configure and test Azure AD SSO with Lexonis TalentScape using a test user called **B.Simon**. For SSO to work, you need to establish a link relationship between an Azure AD user and the related user in Lexonis TalentScape.
Follow these steps to enable Azure AD SSO in the Azure portal.
![Edit Basic SAML Configuration](common/edit-urls.png)
-1. On the **Basic SAML Configuration** section, if you wish to configure the application in **IDP** initiated mode, enter the values for the following fields:
+1. On the **Basic SAML Configuration** section, if you wish to configure the application in **IDP** initiated mode, perform the following steps:
a. In the **Identifier** text box, type a URL using the following pattern: `https://<CUSTOMER_NAME>.lexonis.com/`
Follow these steps to enable Azure AD SSO in the Azure portal.
1. On the **Set up single sign-on with SAML** page, In the **SAML Signing Certificate** section, click copy button to copy **App Federation Metadata Url** and save it on your computer. ![The Certificate download link](common/copy-metadataurl.png)+ ### Create an Azure AD test user In this section, you'll create a test user in the Azure portal called B.Simon.
In this section, you test your Azure AD single sign-on configuration with follow
#### IDP initiated:
-* Click on **Test this application** in Azure portal and you should be automatically signed in to the Lexonis TalentScape for which you set up the SSO
+* Click on **Test this application** in Azure portal and you should be automatically signed in to the Lexonis TalentScape for which you set up the SSO.
You can also use Microsoft My Apps to test the application in any mode. When you click the Lexonis TalentScape tile in the My Apps, if configured in SP mode you would be redirected to the application sign on page for initiating the login flow and if configured in IDP mode, you should be automatically signed in to the Lexonis TalentScape for which you set up the SSO. For more information about the My Apps, see [Introduction to the My Apps](../user-help/my-apps-portal-end-user-access.md).
active-directory Opentext Directory Services Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/saas-apps/opentext-directory-services-tutorial.md
Previously updated : 02/11/2021 Last updated : 06/22/2021
Follow these steps to enable Azure AD SSO in the Azure portal.
1. On the **Basic SAML Configuration** section, if you wish to configure the application in **IDP** initiated mode, enter the values for the following fields:
- a. In the **Identifier** text box, type a URL using the following pattern:
- `https://<SUBDOMAIN>.opentext.com/<OTDSTENANT>/<TENANTID>/login`
+ a. In the **Identifier** text box, type a URL using one of the following patterns:
- b. In the **Reply URL** text box, type a URL using the following pattern:
- `https://<SUBDOMAIN>.opentext.com/<OTDSTENANT>/<TENANTID>/login?authhandler=<HANDLERID>`
+ | Identifier |
+ ||
+ | `https://<HOSTNAME.DOMAIN.com>/otdsws/login` |
+ | `https://<HOSTNAME.DOMAIN.com>/<OTDS_TENANT>/<TENANTID>/otdsws/login` |
+ | `https://<HOSTNAME.DOMAIN.com>/otdsws/<OTDS_TENANT>/<TENANTID>/login` |
+ | `https://<HOSTNAME.DOMAIN.com>/<OTDS_TENANT>/<TENANTID>/login` |
+ |
+
+ b. In the **Reply URL** text box, type a URL using one of the following patterns:
+
+ | Reply URL |
+ ||
+ | `https://<HOSTNAME.DOMAIN.com>/otdsws/login` |
+ | `https://<HOSTNAME.DOMAIN.com>/<OTDS_TENANT>/<TENANTID>/otdsws/login` |
+ | `https://<HOSTNAME.DOMAIN.com>/otdsws/<OTDS_TENANT>/<TENANTID>/login` |
+ | `https://<HOSTNAME.DOMAIN.com>/<OTDS_TENANT>/<TENANTID>/login` |
+ |
1. Click **Set additional URLs** and perform the following step if you wish to configure the application in **SP** initiated mode:
- In the **Sign-on URL** text box, type a URL using the following pattern:
- `https://<SUBDOMAIN>.opentext.com/<OTDSTENANT>/<TENANTID>/login?authhandler=<HANDLERID>`
+ In the **Sign-on URL** text box, type a URL using one of the following patterns:
+
+ | Sign-on URL |
+ ||
+ | `https://<HOSTNAME.DOMAIN.com>/otdsws/login` |
+ | `https://<HOSTNAME.DOMAIN.com>/<OTDS_TENANT>/<TENANTID>/otdsws/login` |
+ | `https://<HOSTNAME.DOMAIN.com>/otdsws/<OTDS_TENANT>/<TENANTID>/login` |
+ | `https://<HOSTNAME.DOMAIN.com>/<OTDS_TENANT>/<TENANTID>/login` |
+ |
> [!NOTE] > These values are not real. Update these values with the actual Identifier, Reply URL and Sign-on URL. Contact [OpenText Directory Services Client support team](mailto:support@opentext.com) to get these values. You can also refer to the patterns shown in the **Basic SAML Configuration** section in the Azure portal.
active-directory Procoresso Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/saas-apps/procoresso-tutorial.md
Previously updated : 06/11/2021 Last updated : 06/21/2021 # Tutorial: Azure Active Directory integration with Procore SSO
To configure Azure AD integration with Procore SSO, you need the following items
* An Azure AD subscription. If you don't have an Azure AD environment, you can get a [free account](https://azure.microsoft.com/free/). * Procore SSO single sign-on enabled subscription.
+> [!NOTE]
+> This integration is also available to use from Azure AD US Government Cloud environment. You can find this application in the Azure AD US Government Cloud Application Gallery and configure it in the same way as you do from public cloud.
+ ## Scenario description In this tutorial, you configure and test Azure AD single sign-on in a test environment.
In this section, you'll enable B.Simon to use Azure single sign-on by granting a
![Screenshot shows the Procore company site with Directory selected.](./media/procoresso-tutorial/admin.png)
-3. Paste the values in the boxes as described below-
+3. Paste the values in the boxes as described below.
![Screenshot shows the Add a Person dialog box.](./media/procoresso-tutorial/setting.png)
Please follow the below steps to create a Procore test user on Procore SSO side.
![Screenshot shows the Procore company site with Directory selected from the toolbox.](./media/procoresso-tutorial/directory.png)
-3. Click on **Add a Person** option to open the form and enter perform following options -
+3. Click on the **Add a Person** option to open the form and perform the following steps.
![Screenshot shows the Add a person to Boylan Construction where you can enter user information.](./media/procoresso-tutorial/user.png)
active-directory Tutorial List https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/saas-apps/tutorial-list.md
Previously updated : 03/09/2021 Last updated : 06/23/2021
To find more tutorials, use the table of contents on the left.
| ![logo-Kendis - Azure AD Integration](./medi)| | ![logo-Knowledge Anywhere LMS](./medi)| | ![logo-Litmus](./medi)|
+| ![logo-LogMeIn](./medi)|
| ![logo-Marketo](./medi)| | ![logo-Meraki Dashboard](./medi)| | ![logo-monday.com](./medi)|
To find more tutorials, use the table of contents on the left.
| ![logo-productboard](./medi)| | ![logo-PurelyHR](./medi)| | ![logo-RingCentral](./medi)|
+| ![logo-Saba Cloud](./medi)|
| ![logo-Salesforce](./medi)| | ![logo-SAP Cloud Platform Identity Authentication](./medi)| | ![logo-ScaleX Enterprise](./medi)|
aks Azure Files Csi https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/aks/azure-files-csi.md
Filesystem
//f149b5a219bd34caeb07de9.file.core.windows.net/pvc-5e5d9980-da38-492b-8581-17e3cad01770 200G 128K 200G 1% /mnt/azurefile ```
+## Use a persistent volume with private Azure Files storage (private endpoint)
+
+If your Azure Files resources are protected with a private endpoint, you must create your own storage class that's customized with the following parameters:
+
+* `resourceGroup`: The resource group where the storage account is deployed.
+* `storageAccount`: The storage account name.
+* `server`: The FQDN of the storage account's private endpoint (for example, `<storage account name>.privatelink.file.core.windows.net`).
+
+Create a file named *private-azure-file-sc.yaml*, and then paste the following example manifest in the file. Replace the values for `<resourceGroup>` and `<storageAccountName>`.
+
+```yaml
+apiVersion: storage.k8s.io/v1
+kind: StorageClass
+metadata:
+ name: private-azurefile-csi
+provisioner: file.csi.azure.com
+allowVolumeExpansion: true
+parameters:
+ resourceGroup: <resourceGroup>
+ storageAccount: <storageAccountName>
+ server: <storageAccountName>.privatelink.file.core.windows.net
+reclaimPolicy: Delete
+volumeBindingMode: Immediate
+mountOptions:
+ - dir_mode=0777
+ - file_mode=0777
+ - uid=0
+ - gid=0
+ - mfsymlinks
+ - cache=strict # https://linux.die.net/man/8/mount.cifs
+ - nosharesock # reduce probability of reconnect race
+ - actimeo=30 # reduce latency for metadata-heavy workload
+```
+
+Create the storage class by using the [kubectl apply][kubectl-apply] command:
+
+```console
+kubectl apply -f private-azure-file-sc.yaml
+
+storageclass.storage.k8s.io/private-azurefile-csi created
+```
+
+Create a file named *private-pvc.yaml*, and then paste the following example manifest in the file:
+
+```yaml
+apiVersion: v1
+kind: PersistentVolumeClaim
+metadata:
+ name: private-azurefile-pvc
+spec:
+ accessModes:
+ - ReadWriteMany
+ storageClassName: private-azurefile-csi
+ resources:
+ requests:
+ storage: 100Gi
+```
+
+Create the PVC by using the [kubectl apply][kubectl-apply] command:
+
+```console
+kubectl apply -f private-pvc.yaml
+```
+ ## NFS file shares [Azure Files now has support for NFS v4.1 protocol](../storage/files/storage-files-how-to-create-nfs-shares.md). NFS 4.1 support for Azure Files provides you with a fully managed NFS file system as a service built on a highly available and highly durable distributed resilient storage platform.
api-management Api Management Access Restriction Policies https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/api-management/api-management-access-restriction-policies.md
Previously updated : 06/02/2021 Last updated : 06/22/2021
For more information about custom CA certificates and certificate authorities, s
### Policy statement ```xml
-<validate-client-certificate>
-    validate-revocation="true|false"
-    validate-trust="true|false"
-    validate-not-before="true|false"
-    validate-not-after="true|false"
-    ignore-error="true|false">
-    <identities>
-        <identity 
-            thumbprint="certificate thumbprint" 
-            serial-number="certificate serial number"
-            common-name="certificate common name" 
-            subject="certificate subject string" 
-            dns-name="certificate DNS name"
-            issuer="certificate issuer"
-            issuer-thumbprint="certificate issuer thumbprint" 
-            issuer-certificate-id="certificate identifier" />
-    </identities>
+<validate-client-certificate
+ validate-revocation="true|false"
+ validate-trust="true|false"
+ validate-not-before="true|false"
+ validate-not-after="true|false"
+ ignore-error="true|false">
+ <identities>
+ <identity
+ thumbprint="certificate thumbprint"
+ serial-number="certificate serial number"
+ common-name="certificate common name"
+ subject="certificate subject string"
+ dns-name="certificate DNS name"
+ issuer-subject="certificate issuer"
+ issuer-thumbprint="certificate issuer thumbprint"
+ issuer-certificate-id="certificate identifier" />
+ </identities>
</validate-client-certificate> ``` ### Example
-The following example validates a client certificate to match the policy's default validation rules and checks whether the subject and issuer match specified values.
+The following example validates a client certificate to match the policy's default validation rules and checks whether the subject and issuer name match specified values.
```xml
-<validate-client-certificate>
-    validate-revocation="true"
-    validate-trust="true"
-    validate-not-before="true"
-    validate-not-after="true"
-    ignore-error="false"
-    <identities>
-        <identity 
+<validate-client-certificate
+ validate-revocation="true"
+ validate-trust="true"
+ validate-not-before="true"
+ validate-not-after="true"
+ ignore-error="false">
+ <identities>
+ <identity
subject="C=US, ST=Illinois, L=Chicago, O=Contoso Corp., CN=*.contoso.com"
- issuer="C=BE, O=FabrikamSign nv-sa, OU=Root CA, CN=FabrikamSign Root CA" />
-    </identities>
+ issuer-subject="C=BE, O=FabrikamSign nv-sa, OU=Root CA, CN=FabrikamSign Root CA" />
+ </identities>
</validate-client-certificate> ```
The following example validates a client certificate to match the policy's defau
| common-name | Certificate common name (part of Subject string). | no | N/A | | subject | Subject string. Must follow format of Distinguished Name. | no | N/A | | dns-name | Value of dnsName entry inside Subject Alternative Name claim. | no | N/A |
-| issuer | Issuer's subject. Must follow format of Distinguished Name. | no | N/A |
+| issuer-subject | Issuer's subject. Must follow format of Distinguished Name. | no | N/A |
| issuer-thumbprint | Issuer thumbprint. | no | N/A |
-| issuer-certificate-id | Identifier of existing Certificate entity representing Issuer's public key. | no | N/A |
+| issuer-certificate-id | Identifier of existing certificate entity representing the issuer's public key. Mutually exclusive with other issuer attributes. | no | N/A |
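
For illustration, the following sketch matches the issuer through a managed certificate entity instead of issuer attributes. The certificate ID `my-issuer-ca` is a hypothetical identifier, not one defined in this article.

```xml
<validate-client-certificate validate-trust="true" ignore-error="false">
    <identities>
        <!-- issuer-certificate-id is mutually exclusive with issuer-subject and issuer-thumbprint -->
        <identity
            common-name="*.contoso.com"
            issuer-certificate-id="my-issuer-ca" />
    </identities>
</validate-client-certificate>
```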
### Usage
api-management Api Management Howto Deploy Multi Region https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/api-management/api-management-howto-deploy-multi-region.md
Azure API Management supports multi-region deployment, which enables API publish
A new Azure API Management service initially contains only one [unit][unit] in a single Azure region, the Primary region. Additional units can be added to the Primary or Secondary regions. An API Management gateway component is deployed to every selected Primary and Secondary region. Incoming API requests are automatically directed to the closest region. If a region goes offline, the API requests will be automatically routed around the failed region to the next closest gateway. > [!NOTE]
-> Only the gateway component of API Management is deployed to all regions. The service management component and developer portal are hosted in the Primary region only. Therefore, in case of the Primary region outage, access to the developer portal and ability to change configuration (e.g. adding APIs, applying policies) will be impaired until the Primary region comes back online. While the Primary region is offline, available Secondary regions will continue to serve the API traffic using the latest configuration available to them. Optionally enable [zone redundacy](zone-redundancy.md) to improve the availability and resiliency of the Primary or Secondary regions.
+> Only the gateway component of API Management is deployed to all regions. The service management component and developer portal are hosted in the Primary region only. Therefore, in case of the Primary region outage, access to the developer portal and ability to change configuration (e.g. adding APIs, applying policies) will be impaired until the Primary region comes back online. While the Primary region is offline, available Secondary regions will continue to serve the API traffic using the latest configuration available to them. Optionally enable [zone redundancy](zone-redundancy.md) to improve the availability and resiliency of the Primary or Secondary regions.
>[!IMPORTANT] > The feature to enable storing customer data in a single region is currently only available in the Southeast Asia Region (Singapore) of the Asia Pacific Geo. For all other regions, customer data is stored in Geo.
app-service Configure Connect To Azure Storage https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/app-service/configure-connect-to-azure-storage.md
az webapp config storage-account list --resource-group <resource-group> --name <
| Setting | Description | |-|-|
- | **Name** | Name of the mount configuration. |
+ | **Name** | Name of the mount configuration. Spaces are not allowed. |
| **Configuration options** | Select **Basic** if the storage account is not using [service endpoints](../storage/common/storage-network-security.md#grant-access-from-a-virtual-network) or [private endpoints](../storage/common/storage-private-endpoints.md). Otherwise, select **Advanced**. | | **Storage accounts** | Azure Storage account. | | **Storage type** | Select the type based on the storage you want to mount. Azure Blobs only supports read-only access. |
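
The same mount can also be configured from the CLI with `az webapp config storage-account add`; this is a sketch with placeholder names, mirroring the portal settings above.

```azurecli
az webapp config storage-account add --resource-group <resource-group> --name <app-name> \
    --custom-id <mount-config-name> \
    --storage-type AzureFiles \
    --account-name <storage-account-name> \
    --share-name <share-name> \
    --access-key <access-key> \
    --mount-path /mounts/<mount-name>
```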
app-service Overview Manage Costs https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/app-service/overview-manage-costs.md
description: Learn how to plan for and manage costs for Azure App Service by usi
Previously updated : 01/01/2021 Last updated : 06/23/2021
Last updated 01/01/2021
This article describes how you plan for and manage costs for Azure App Service. First, you use the Azure pricing calculator to help plan for App Service costs before you add any resources for the service to estimate costs. Next, as you add Azure resources, review the estimated costs. After you've started using App Service resources, use [Cost Management](../cost-management-billing/index.yml?WT.mc_id=costmanagementcontent_docsacmhorizontal_-inproduct-learn) features to set budgets and monitor costs. You can also review forecasted costs and identify spending trends to identify areas where you might want to act. Costs for Azure App Service are only a portion of the monthly costs in your Azure bill. Although this article explains how to plan for and manage costs for App Service, you're billed for all Azure services and resources used in your Azure subscription, including the third-party services.
-## Relevant costs for App Service
+## Understand the full billing model for Azure App Service
-App Service runs on Azure infrastructure that accrues cost. It's important to understand that additional infrastructure might accrue cost. You must manage that cost when you make changes to deployed resources.
+Azure App Service runs on Azure infrastructure that accrues costs when you deploy new resources. It's important to understand that additional infrastructure costs might also accrue.
-### Costs that accrue with Azure App Service
+### How you're charged for Azure App Service
-Depending on which feature you use in App Service, the following cost-accruing resources may be created:
+When you create or use App Service resources, you're charged for the following meters:
-- **App Service plan** Required to host an App Service app.-- **Isolated tier** A [Virtual Network](../virtual-network/index.yml) is required for an App Service environment.-- **Backup** A [Storage account](../storage/index.yml) is required to make backups.-- **Diagnostic logs** You can select [Storage account](../storage/index.yml) as the logging option, or integrate with [Azure Log Analytics](../azure-monitor/logs/log-analytics-tutorial.md).-- **App Service certificates** Certificates you purchase in Azure must be maintained in [Azure Key Vault](../key-vault/index.yml).
+- You're charged an hourly rate based on the pricing tier of your App Service plan, prorated to the second.
+- The charge is applied to each scaled-out instance in your plan, based on the amount of time that the VM instance is allocated.
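
To see which tier and instance count your plan is currently billed for, you can query the plan; a sketch with placeholder names:

```azurecli
az appservice plan show --name <plan-name> --resource-group <resource-group> \
    --query "{tier: sku.name, instances: sku.capacity}"
```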
Other cost resources for App Service are (see [App Service pricing](https://azure.microsoft.com/pricing/details/app-service/) for details):
Other cost resources for App Service are (see [App Service pricing](https://azur
- [App Service certificates](configure-ssl-certificate.md#import-an-app-service-certificate) One-time charge at the time of purchase. If you have multiple subdomains to secure, you can reduce cost by purchasing one wildcard certificate instead of multiple standard certificates. - [IP-based certificate bindings](configure-ssl-bindings.md#create-binding) The binding is configured on a certificate at the app level. Costs are accrued for each binding. For **Standard** tier and above, the first IP-based binding is not charged.
+At the end of your billing cycle, the charges for each VM instance are summed. Your bill or invoice shows a section for all App Service costs. There's a separate line item for each meter.
+
+### Other costs that might accrue with Azure App Service
+
+Depending on which feature you use in App Service, the following cost-accruing resources may be created:
+
+- **Isolated tier** A [Virtual Network](../virtual-network/index.yml) is required for an App Service environment and is charged separately.
+- **Backup** A [Storage account](../storage/index.yml) is required to make backups and is charged separately.
+- **Diagnostic logs** You can select [Storage account](../storage/index.yml) as the logging option, or integrate with [Azure Log Analytics](../azure-monitor/logs/log-analytics-tutorial.md). These services are charged separately.
+- **App Service certificates** Certificates you purchase in Azure must be maintained in [Azure Key Vault](../key-vault/index.yml), which is charged separately.
+ ### Costs that might accrue after resource deletion When you delete all apps in an App Service plan, the plan continues to accrue charges based on its configured pricing tier and number of instances. To avoid unwanted charges, delete the plan or scale it down to **Free** tier.
After you delete Azure App Service resources, resources from related Azure servi
- Log Analytic namespaces you created to ship diagnostic logs - [Instance or stamp reservations](#azure-reservations) for App Service that haven't expired yet
-### Using Monetary Credit with Azure App Service
+### Using Azure Prepayment with Azure App Service
-You can pay for Azure App Service charges with your Azure Prepayment (previously called monetary commitment) credit. However, you can't use Azure Prepayment credit to pay for charges for third-party products and services, including those from the Azure Marketplace.
+You can pay for Azure App Service charges with your Azure Prepayment credit. However, you can't use Azure Prepayment credit to pay for charges for third-party products and services, including those from the Azure Marketplace.
## Estimate costs
app-service Samples Cli https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/app-service/samples-cli.md
The following table includes links to bash scripts built using the Azure CLI.
| [Create a scheduled backup for an app](./scripts/cli-backup-scheduled.md?toc=%2fcli%2fazure%2ftoc.json) | Creates an App Service app and creates a scheduled backup for it. | | [Restores an app from a backup](./scripts/cli-backup-restore.md?toc=%2fcli%2fazure%2ftoc.json) | Restores an App Service app from a backup. | |**Monitor app**||
-| [Monitor an app with web server logs](./scripts/cli-monitor.md?toc=%2fcli%2fazure%2ftoc.json) | Creates an App Service app, enables logging for it, and downloads the logs to your local machine. |
+| [Monitor an app with web server logs](./scripts/cli-monitor.md?toc=%2fcli%2fazure%2ftoc.json) | Creates an App Service app, enables logging for it, and downloads the logs to your local machine. |
| | |
app-service Powershell Deploy Ftp https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/app-service/scripts/powershell-deploy-ftp.md
tags: azure-service-management
ms.assetid: b7d46d6f-44fd-454c-8008-87dab6eefbc1 Previously updated : 03/20/2017 Last updated : 06/23/2021 # Upload files to a web app using FTP
-This sample script creates a web app in App Service with its related resources, and then deploys your web app code using FTP (via [WebClient.UploadFile()](/dotnet/api/system.net.webclient.uploadfile)).
+This sample script creates a web app in App Service with its related resources, and then deploys a file to it using FTPS (via [System.Net.FtpWebRequest](/dotnet/api/system.net.ftpwebrequest)).
If needed, install the Azure PowerShell using the instruction found in the [Azure PowerShell guide](/powershell/azure/), and then run `Connect-AzAccount` to create a connection with Azure.
automation Automation Send Email https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/automation/automation-send-email.md
You can send an email from a runbook with [SendGrid](https://sendgrid.com/soluti
## Create an Azure Key Vault
-You can create an Azure Key Vault using the following PowerShell script. Replace the variable values with values specific to your environment. Use the embedded Azure Cloud Shell via the **Try It** button, located in the top right corner of the code block. You can also copy and run the code locally if you have the [Az modules](/powershell/azure/install-az-ps) installed on your local machine.
+You can create an Azure Key Vault using the following PowerShell script. Replace the variable values with values specific to your environment. Use the embedded Azure Cloud Shell via the **Try It** button, located in the top-right corner of the code block. You can also copy and run the code locally if you have the [Az modules](/powershell/azure/install-az-ps) installed on your local machine. This script also creates a [Key Vault access policy](../key-vault/general/assign-access-policy-portal.md) that allows the Run As account to get and set key vault secrets in the specified key vault.
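
A stripped-down sketch of that script follows, assuming the Az PowerShell modules are installed and you're signed in; the vault name, resource group, region, Run As application ID, and secret name are placeholders, not values from this article.

```powershell
# Replace these values with ones for your environment
$vaultName     = "<vault-name>"
$resourceGroup = "<resource-group>"
$location      = "<region>"                 # for example, eastus
$runAsAppId    = "<run-as-application-id>"  # the Run As account's application (client) ID

# Create the key vault
New-AzKeyVault -Name $vaultName -ResourceGroupName $resourceGroup -Location $location

# Allow the Run As account to get and set secrets in the vault
Set-AzKeyVaultAccessPolicy -VaultName $vaultName -ServicePrincipalName $runAsAppId `
    -PermissionsToSecrets Get, Set

# Store the SendGrid API key as a secret
$secretValue = ConvertTo-SecureString -String "<SendGrid-API-key>" -AsPlainText -Force
Set-AzKeyVaultSecret -VaultName $vaultName -Name "SendGridAPIKey" -SecretValue $secretValue
```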
> [!NOTE] > To retrieve your API key, use the steps in [Find your SendGrid API key](../sendgrid-dotnet-how-to-send-email.md#to-find-your-sendgrid-api-key).
automation Deploy Updates https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/automation/update-management/deploy-updates.md
To schedule a new update deployment, perform the following steps. Depending on t
> Deploying updates by update classification doesn't work on RTM versions of CentOS. To properly deploy updates for CentOS, select all classifications to make sure updates are applied. There's currently no supported method to enable native classification-data availability on CentOS. See the following for more information about [Update classifications](overview.md#update-classifications). >[!NOTE]
- > Deploying updates by update classification does may not work correctly for Linux distros supported by Update Management. This is a result of an issue identified with the naming schema of the OVAL file and this prevents Update Management from properly matching classifications based on filtering rules. Because of the different logic used in security update assessments, results may differ from the security updates applied during deployment if your update schedules for Linux has the classification set as **Critical and security updates**.
+ > Deploying updates by update classification may not work correctly for Linux distros supported by Update Management. This is a result of an issue identified with the naming schema of the OVAL file and this prevents Update Management from properly matching classifications based on filtering rules. Because of the different logic used in security update assessments, results may differ from the security updates applied during deployment if your update schedules for Linux has the classification set as **Critical and security updates**.
> > Update Management for Windows Server machines is unaffected; update classification and deployments are unchanged.
automation Overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/automation/update-management/overview.md
When you schedule an update to run on a Linux machine, that for example is confi
Categorization is done for Linux updates as **Security** or **Others** based on the OVAL files, which includes updates addressing security issues or vulnerabilities. But when the update schedule is run, it executes on the Linux machine using the appropriate package manager like YUM, APT, or ZYPPER to install them. The package manager for the Linux distro may have a different mechanism to classify updates, where the results may differ from the ones obtained from OVAL files by Update Management. To manually check the machine and understand which updates are security relevant by your package manager, see [Troubleshoot Linux update deployment](../troubleshoot/update-management.md#updates-linux-installed-different). >[!NOTE]
-> Deploying updates by update classification does may not work correctly for Linux distros supported by Update Management. This is a result of an issue identified with the naming schema of the OVAL file and this prevents Update Management from properly matching classifications based on filtering rules. Because of the different logic used in security update assessments, results may differ from the security updates applied during deployment if your update schedules for Linux has the classification set as **Critical and security updates**.
+> Deploying updates by update classification may not work correctly for Linux distros supported by Update Management. This is a result of an issue identified with the naming schema of the OVAL file and this prevents Update Management from properly matching classifications based on filtering rules. Because of the different logic used in security update assessments, results may differ from the security updates applied during deployment if your update schedules for Linux has the classification set as **Critical and security updates**.
> > Update Management for Windows Server machines is unaffected; update classification and deployments are unchanged.
azure-cache-for-redis Cache Administration https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-cache-for-redis/cache-administration.md
Title: How to administer Azure Cache for Redis description: Learn how to perform administration tasks such as reboot and schedule updates for Azure Cache for Redis - Last updated 07/05/2017
Yes, for PowerShell instructions see [To reboot an Azure Cache for Redis](cache-
## Schedule updates
-On the left, **Schedule updates** allow you to choose a maintenance window for your cache instance. A maintenance window allows you to control the day(s) and time(s) of a week during which the VM(s) hosting your cache can be updated. Azure Cache for Redis will make a best effort to start and finish updating Redis server software within the specified time window you define.
+On the left, **Schedule updates** allows you to choose a maintenance window for your cache instance. A maintenance window allows you to control the day(s) and time(s) of a week during which the VM(s) hosting your cache can be updated. Azure Cache for Redis will make a best effort to start and finish updating Redis server software within the specified time window you define.
> [!NOTE] > The maintenance window applies to Redis server updates and updates to the Operating System of the VMs hosting the cache. The maintenance window does not apply to Host OS updates to the Hosts hosting the cache VMs or other Azure Networking components. In rare cases, where caches are hosted on older models (you can tell if your cache is on an older model if the DNS name of the cache resolves to a suffix of "cloudapp.net", "chinacloudapp.cn", "usgovcloudapi.net" or "cloudapi.de"), the maintenance window won't apply to Guest OS updates either.
On the left, **Schedule updates** allow you to choose a maintenance window for y
To specify a maintenance window, check the days you want and specify the maintenance window start hour for each day. Then, select **OK**. The maintenance window time is in UTC.
-The default, and minimum, maintenance window for updates is five hours. This value isn't configurable from the Azure portal, but you can configure it in PowerShell using the `MaintenanceWindow` parameter of the [New-AzRedisCacheScheduleEntry](/powershell/module/az.rediscache/new-azrediscachescheduleentry) cmdlet. For more information, see Can I manage scheduled updates using PowerShell, CLI, or other management tools?
+The default, and minimum, maintenance window for updates is five hours. This value isn't configurable from the Azure portal, but you can configure it in PowerShell using the `MaintenanceWindow` parameter of the [New-AzRedisCacheScheduleEntry](/powershell/module/az.rediscache/new-azrediscachescheduleentry) cmdlet. For more information, see [Can I manage scheduled updates using PowerShell, CLI, or other management tools?](#can-i-manage-scheduled-updates-using-powershell-cli-or-other-management-tools)
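
For example, the following sketch (assuming the Az.RedisCache module) defines a Saturday 02:00 UTC entry with an eight-hour window and applies it; the cache and resource group names are placeholders.

```powershell
# Define a schedule entry: Saturday at 02:00 UTC with an 8-hour maintenance window
$entry = New-AzRedisCacheScheduleEntry -DayOfWeek "Saturday" -StartHourUtc 2 -MaintenanceWindow "08:00:00"

# Apply the patch schedule to the cache
Set-AzRedisCachePatchSchedule -ResourceGroupName "<resource-group>" -Name "<cache-name>" -Entries @($entry)
```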
## Schedule updates FAQ
azure-cache-for-redis Cache Failover https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-cache-for-redis/cache-failover.md
To build resilient and successful client applications, it's critical to understa
In this article, you find this information: -- What is a failover.
+- What is a failover?
- How failover occurs during patching. - How to build a resilient client application.
The number of errors seen by the client application depends on how many operatio
Most client libraries attempt to reconnect to the cache if they're configured to do so. However, unforeseen bugs can occasionally place the library objects into an unrecoverable state. If errors persist for longer than a preconfigured amount of time, the connection object should be recreated. In Microsoft.NET and other object-oriented languages, recreating the connection without restarting the application can be accomplished by using [a Lazy\<T\> pattern](https://gist.github.com/JonCole/925630df72be1351b21440625ff2671f#reconnecting-with-lazyt-pattern).
-### How do I make my application resilient?
-
-Because you can't avoid failovers completely, write your client applications for resiliency to connection breaks and failed requests. Although most client libraries automatically reconnect to the cache endpoint, few of them attempt to retry failed requests. Depending on the application scenario, it might make sense to use retry logic with backoff.
-
-To test a client application's resiliency, use a [reboot](cache-administration.md#reboot) as a manual trigger for connection breaks. Additionally, we recommend that you [schedule updates](cache-administration.md#schedule-updates) on a cache. Tell the management service to apply Redis runtime patches during specified weekly windows. These windows are typically periods when client application traffic is low, to avoid potential incidents.
- ### Can I be notified in advance of a planned maintenance? Azure Cache for Redis now publishes notifications on a publish/subscribe channel called [AzureRedisEvents](https://github.com/Azure/AzureCacheForRedis/blob/main/AzureRedisEvents.md) around 30 seconds before planned updates. The notifications are runtime notifications. They're built especially for applications that can use circuit breakers to bypass the cache or buffer commands, for example, during planned updates. It's not a mechanism that can notify you days or hours in advance.
Certain client-side network-configuration changes can trigger "No connection ava
Such changes can cause a connectivity issue that lasts less than one minute. Your client application will probably lose its connection to other external network resources, but also to the Azure Cache for Redis service.
+## Build in resiliency
+
+You can't avoid failovers completely. Instead, write your client applications to be resilient to connection breaks and failed requests. Most client libraries automatically reconnect to the cache endpoint, but few of them attempt to retry failed requests. Depending on the application scenario, it might make sense to use retry logic with backoff.
+
+### How do I make my application resilient?
+
+Refer to these design patterns to build resilient clients, especially the circuit breaker and retry patterns:
+
+- [Reliability patterns - Cloud Design Patterns](/azure/architecture/framework/resiliency/reliability-patterns#resiliency)
+- [Retry guidance for Azure services - Best practices for cloud applications](/azure/architecture/best-practices/retry-service-specific)
+- [Implement retries with exponential backoff](/dotnet/architecture/microservices/implement-resilient-applications/implement-retries-exponential-backoff)
+
+To test a client application's resiliency, use a [reboot](cache-administration.md#reboot) as a manual trigger for connection breaks.
+
+Additionally, we recommend that you [schedule updates](cache-administration.md#schedule-updates) on a cache to apply Redis runtime patches during specific weekly windows. These windows are typically periods when client application traffic is low, to avoid potential incidents.
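
As a minimal sketch of the Lazy\<T\> reconnect approach mentioned earlier, assuming a .NET client such as StackExchange.Redis (the connection string values are placeholders):

```csharp
using System;
using System.Threading;
using StackExchange.Redis;

public static class RedisConnection
{
    // Lazily create the multiplexer so it can be swapped out if it becomes unusable.
    private static Lazy<ConnectionMultiplexer> _lazy = Create();

    private static Lazy<ConnectionMultiplexer> Create() =>
        new Lazy<ConnectionMultiplexer>(() =>
            ConnectionMultiplexer.Connect(
                "<cache-name>.redis.cache.windows.net:6380,password=<access-key>,ssl=True,abortConnect=False"));

    public static ConnectionMultiplexer Connection => _lazy.Value;

    // Call this when errors persist past your threshold: swap in a fresh connection and close the old one.
    public static void ForceReconnect()
    {
        Lazy<ConnectionMultiplexer> old = Interlocked.Exchange(ref _lazy, Create());
        if (old.IsValueCreated)
        {
            try { old.Value.Close(); } catch (Exception) { /* best effort */ }
        }
    }
}
```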
+ ## Next steps - [Schedule updates](cache-administration.md#schedule-updates) for your cache.
azure-functions Create First Function Cli Python https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-functions/create-first-function-cli-python.md
Use the following commands to create these items. Both Azure CLI and PowerShell
The [Connect-AzAccount](/powershell/module/az.accounts/connect-azaccount) cmdlet signs you into your Azure account.
+
+1. When using the Azure CLI, you can turn on the `param-persist` option that automatically tracks the names of your created resources. To learn more, see [Azure CLI persisted parameter](/cli/azure/param-persist-howto).
+
+ # [Azure CLI](#tab/azure-cli)
+ ```azurecli
+ az config param-persist on
+ ```
+ # [Azure PowerShell](#tab/azure-powershell)
+
+ This feature isn't available in Azure PowerShell.
+
+
1. Create a resource group named `AzureFunctionsQuickstart-rg` in the `westeurope` region.
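
For reference, a sketch of the corresponding Azure CLI command (with `param-persist` on, the resource group and location are then reused automatically by the commands that follow):

```azurecli
az group create --name AzureFunctionsQuickstart-rg --location westeurope
```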
Use the following commands to create these items. Both Azure CLI and PowerShell
# [Azure CLI](#tab/azure-cli) ```azurecli
- az storage account create --name <STORAGE_NAME> --location westeurope --resource-group AzureFunctionsQuickstart-rg --sku Standard_LRS
+ az storage account create --name <STORAGE_NAME> --sku Standard_LRS
``` The [az storage account create](/cli/azure/storage/account#az_storage_account_create) command creates the storage account.
Use the following commands to create these items. Both Azure CLI and PowerShell
# [Azure CLI](#tab/azure-cli) ```azurecli
- az functionapp create --resource-group AzureFunctionsQuickstart-rg --consumption-plan-location westeurope --runtime python --runtime-version 3.8 --functions-version 3 --name <APP_NAME> --storage-account <STORAGE_NAME> --os-type linux
+ az functionapp create --consumption-plan-location westeurope --runtime python --runtime-version 3.8 --functions-version 3 --name <APP_NAME> --os-type linux
``` The [az functionapp create](/cli/azure/functionapp#az_functionapp_create) command creates the function app in Azure. If you are using Python 3.7 or 3.6, change `--runtime-version` to `3.7` or `3.6`, respectively.
Use the following commands to create these items. Both Azure CLI and PowerShell
- In the previous example, replace `<STORAGE_NAME>` with the name of the account you used in the previous step, and replace `<APP_NAME>` with a globally unique name appropriate to you. The `<APP_NAME>` is also the default DNS domain for the function app.
+ In the previous example, replace `<APP_NAME>` with a globally unique name appropriate to you. The `<APP_NAME>` is also the default DNS domain for the function app.
This command creates a function app running in your specified language runtime under the [Azure Functions Consumption Plan](consumption-plan.md), which is free for the amount of usage you incur here. The command also provisions an associated Azure Application Insights instance in the same resource group, with which you can monitor your function app and view logs. For more information, see [Monitor Azure Functions](functions-monitoring.md). The instance incurs no costs until you activate it.
azure-functions Functions Bindings Mobile Apps https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-functions/functions-bindings-mobile-apps.md
The following table explains the binding configuration properties that you set i
|**tableName** |**TableName**|Name of the mobile app's data table| | **id**| **Id** | The identifier of the record to retrieve. Can be static or based on the trigger that invokes the function. For example, if you use a queue trigger for your function, then `"id": "{queueTrigger}"` uses the string value of the queue message as the record ID to retrieve.| |**connection**|**Connection**|The name of an app setting that has the mobile app's URL. The function uses this URL to construct the required REST operations against your mobile app. Create an app setting in your function app that contains the mobile app's URL, then specify the name of the app setting in the `connection` property in your input binding. The URL looks like `http://<appname>.azurewebsites.net`.
-|**apiKey**|**ApiKey**|The name of an app setting that has your mobile app's API key. Provide the API key if you [implement an API key in your Node.js mobile app](https://github.com/Azure/azure-mobile-apps-node/tree/master/samples/api-key), or [implement an API key in your .NET mobile app](https://github.com/Azure/azure-mobile-apps-net-server/wiki/Implementing-Application-Key). To provide the key, create an app setting in your function app that contains the API key, then add the `apiKey` property in your input binding with the name of the app setting. |
+|**apiKey**|**ApiKey**|The name of an app setting that has your mobile app's API key. Provide the API key if you implement an API key in your Node.js mobile app, or [implement an API key in your .NET mobile app](https://github.com/Azure/azure-mobile-apps-net-server/wiki/Implementing-Application-Key). To provide the key, create an app setting in your function app that contains the API key, then add the `apiKey` property in your input binding with the name of the app setting. |
[!INCLUDE [app settings to local.settings.json](../../includes/functions-app-settings-local.md)]
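
To show how these properties fit together, here's a hedged *function.json* sketch for a queue-triggered function with a Mobile Apps input binding; the table name and the `MyMobileAppUrl`/`MyMobileAppApiKey` app-setting names are hypothetical.

```json
{
  "bindings": [
    {
      "type": "queueTrigger",
      "direction": "in",
      "name": "myQueueItem",
      "queueName": "records-to-fetch",
      "connection": "AzureWebJobsStorage"
    },
    {
      "type": "mobileTable",
      "direction": "in",
      "name": "record",
      "tableName": "MyTable",
      "id": "{queueTrigger}",
      "connection": "MyMobileAppUrl",
      "apiKey": "MyMobileAppApiKey"
    }
  ]
}
```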
The following table explains the binding configuration properties that you set i
| **name**| n/a | Name of output parameter in function signature.| |**tableName** |**TableName**|Name of the mobile app's data table| |**connection**|**MobileAppUriSetting**|The name of an app setting that has the mobile app's URL. The function uses this URL to construct the required REST operations against your mobile app. Create an app setting in your function app that contains the mobile app's URL, then specify the name of the app setting in the `connection` property in your input binding. The URL looks like `http://<appname>.azurewebsites.net`.
-|**apiKey**|**ApiKeySetting**|The name of an app setting that has your mobile app's API key. Provide the API key if you [implement an API key in your Node.js mobile app backend](https://github.com/Azure/azure-mobile-apps-node/tree/master/samples/api-key), or [implement an API key in your .NET mobile app backend](https://github.com/Azure/azure-mobile-apps-net-server/wiki/Implementing-Application-Key). To provide the key, create an app setting in your function app that contains the API key, then add the `apiKey` property in your input binding with the name of the app setting. |
+|**apiKey**|**ApiKeySetting**|The name of an app setting that has your mobile app's API key. Provide the API key if you implement an API key in your Node.js mobile app backend, or [implement an API key in your .NET mobile app backend](https://github.com/Azure/azure-mobile-apps-net-server/wiki/Implementing-Application-Key). To provide the key, create an app setting in your function app that contains the API key, then add the `apiKey` property in your input binding with the name of the app setting. |
[!INCLUDE [app settings to local.settings.json](../../includes/functions-app-settings-local.md)]
azure-maps How To Render Custom Data https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-maps/how-to-render-custom-data.md
To get a static image with custom pins and labels:
3. Enter a **Request name** for the request, such as *GET Static Image*. + 4. Select the **GET** HTTP method. + 5. Enter the following URL (replace `{subscription-key}` with your primary subscription key): ```HTTP
azure-maps How To Request Elevation Data https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-maps/how-to-request-elevation-data.md
To request elevation data in raster tile format using the Postman app:
1. In the Postman app, select **New**.
-2. In the **Create New** window, select **Collection**.
+2. In the **Create New** window, select **HTTP Request**.
-3. To rename the collection, right click on your collection, and select **Rename**.
-
-4. Select **New** again.
-
-5. In the **Create New** window, select **Request**.
-
-6. Enter a **Request name** for the request.
+3. Enter a **Request name** for the request.
-7. Select the collection that you created, and then select **Save**.
+4. Select the collection that you created, and then select **Save**.
-8. On the **Builder** tab, select the **GET** HTTP method and then enter the following URL to request the raster tile.
+5. On the **Builder** tab, select the **GET** HTTP method and then enter the following URL to request the raster tile.
```http https://atlas.microsoft.com/map/tile?subscription-key={Azure-Maps-Primary-Subscription-key}&api-version=2.0&tilesetId=microsoft.dem&zoom=13&x=6074&y=3432
To request elevation data in raster tile format using the Postman app:
>[!Important] >For this request, and other requests mentioned in this article, replace `{Azure-Maps-Primary-Subscription-key}` with your primary subscription key.
-9. Select the **Send** button.
+6. Select the **Send** button.
You should receive the raster tile that contains the elevation data in GeoTIFF format. Each pixel within the raster tile raw data is of type `float`. The value of each pixel represents the elevation height in meters.
To create the request:
1. In the Postman app, select **New** again.
-2. In the **Create New** window, select **Request**.
+2. In the **Create New** window, select **HTTP Request**.
3. Enter a **Request name** for the request.
-4. Select the collection that you previously created, and then select **Save**.
-
-5. On the **Builder** tab, select the **GET** HTTP method, and then enter the following URL (replace `{Azure-Maps-Primary-Subscription-key}` with your primary subscription key):
+4. On the **Builder** tab, select the **GET** HTTP method, and then enter the following URL (replace `{Azure-Maps-Primary-Subscription-key}` with your primary subscription key):
```http https://atlas.microsoft.com/elevation/point/json?subscription-key={Azure-Maps-Primary-Subscription-key}&api-version=1.0&points=-73.998672,40.714728|150.644,-34.397 ```
-6. Select the **Send** button. You'll receive the following JSON response:
+5. Select the **Send** button. You'll receive the following JSON response:
```json {
To create the request:
} ```
-7. Now, we'll call the [Post Data for Points API](/rest/api/maps/elevation/postdataforpoints) to get elevation data for the same two points. On the **Builder** tab, select the **POST** HTTP method and then enter the following URL (replace `{Azure-Maps-Primary-Subscription-key}` with your primary subscription key):
+6. Now, we'll call the [Post Data for Points API](/rest/api/maps/elevation/postdataforpoints) to get elevation data for the same two points. On the **Builder** tab, select the **POST** HTTP method and then enter the following URL (replace `{Azure-Maps-Primary-Subscription-key}` with your primary subscription key):
```http https://atlas.microsoft.com/elevation/point/json?subscription-key={Azure-Maps-Primary-Subscription-key}&api-version=1.0 ```
-8. In the **Headers** field of the **POST** request, set `Content-Type` to `application/json`.
+7. In the **Headers** field of the **POST** request, set `Content-Type` to `application/json`.
-1. In the **Body** field, provide the following coordinate point information:
+8. In the **Body** field, provide the following coordinate point information:
```json [
To create the request:
1. In the Postman app, select **New**.
-2. In the **Create New** window, select **Request**.
-
-3. Enter a **Request name**, and then select a collection.
+2. In the **Create New** window, select **HTTP Request**.
-4. Select **Save**.
+3. Enter a **Request name**.
-5. On the **Builder** tab, select the **GET** HTTP method, and then enter the following URL (replace `{Azure-Maps-Primary-Subscription-key}` with your primary subscription key):
+4. On the **Builder** tab, select the **GET** HTTP method, and then enter the following URL (replace `{Azure-Maps-Primary-Subscription-key}` with your primary subscription key):
```http https://atlas.microsoft.com/elevation/line/json?api-version=1.0&subscription-key={Azure-Maps-Primary-Subscription-key}&lines=-73.998672,40.714728|150.644,-34.397&samples=5 ```
-6. Select the **Send** button. You'll receive the following JSON response:
+5. Select the **Send** button. You'll receive the following JSON response:
```JSON {
To create the request:
} ```
-7. Now, we'll request three samples of elevation data along a path between coordinates at Mount Everest, Chamlang, and Jannu mountains. In the **Params** field, enter the following coordinate array for the value of the `lines` query key.
+6. Now, we'll request three samples of elevation data along a path between coordinates at Mount Everest, Chamlang, and Jannu mountains. In the **Params** field, enter the following coordinate array for the value of the `lines` query key.
```html 86.9797222, 27.775|86.9252778, 27.9880556 | 88.0444444, 27.6822222 ```
-8. Change the `samples` query key value to `3`. The image below shows the new values.
+7. Change the `samples` query key value to `3`. The image below shows the new values.
:::image type="content" source="./media/how-to-request-elevation-data/get-elevation-samples.png" alt-text="Retrieve three elevation data samples.":::
-9. Select **Send**. You'll receive the following JSON response:
+8. Select **Send**. You'll receive the following JSON response:
```json {
To create the request:
} ```
-10. Now, we'll call the [Post Data For Polyline API](/rest/api/maps/elevation/postdataforpolyline) to get elevation data for the same three points. On the **Builder** tab, select the **POST** HTTP method, and then enter the following URL (replace `{Azure-Maps-Primary-Subscription-key}` with your primary subscription key):
+9. Now, we'll call the [Post Data For Polyline API](/rest/api/maps/elevation/postdataforpolyline) to get elevation data for the same three points. On the **Builder** tab, select the **POST** HTTP method, and then enter the following URL (replace `{Azure-Maps-Primary-Subscription-key}` with your primary subscription key):
```http https://atlas.microsoft.com/elevation/line/json?api-version=1.0&subscription-key={Azure-Maps-Primary-Subscription-key}&samples=5 ```
-11. In the **Headers** field of the **POST** request, set `Content-Type` to `application/json`.
+10. In the **Headers** field of the **POST** request, set `Content-Type` to `application/json`.
-1. In the **Body** field, provide the following coordinate point information.
+11. In the **Body** field, provide the following coordinate point information.
```json [
In this example, we'll specify rows=3 and columns=6. The response returns 18 ele
To create the request:
-1. In the Postman app, select **New**.
-
-2. In the **Create New** window, select **Request**.
+1. In the Postman app, select **New**.
-3. Enter a **Request name**, and then select a collection.
+2. In the **Create New** window, select **HTTP Request**.
-4. Select **Save**.
+3. Enter a **Request name**.
-5. On the **Builder** tab, select the **GET** HTTP method, and then enter the following URL (replace `{Azure-Maps-Primary-Subscription-key}` with your primary subscription key):
+4. On the **Builder** tab, select the **GET** HTTP method, and then enter the following URL (replace `{Azure-Maps-Primary-Subscription-key}` with your primary subscription key):
```http https://atlas.microsoft.com/elevation/lattice/json?subscription-key={Azure-Maps-Primary-Subscription-key}&api-version=1.0&bounds=-121.66853362143818, 46.84646479863713,-121.65853362143818, 46.85646479863713&rows=2&columns=3 ```
-6. Select **Send**. The response returns 18 elevation data samples, one for each vertex of the grid.
+5. Select **Send**. The response returns 18 elevation data samples, one for each vertex of the grid.
```json {
azure-maps How To Request Real Time Data https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-maps/how-to-request-real-time-data.md
Title: Request real-time public transit data with Microsoft Azure Maps Mobility
description: Learn how to request real-time public transit data, such as arrivals at a transit stop. See how to use the Azure Maps Mobility services (Preview) for this purpose. Previously updated : 12/07/2020 Last updated : 06/22/2021
In order to request real-time arrivals data of a particular public transit stop,
Let's use "522" as our metro ID, which is the metro ID for the "SeattleΓÇôTacomaΓÇôBellevue, WA" area. Use "5222060603" as the stop ID, this bus stop is at "Ne 24th St & 162nd Ave Ne, Bellevue WA". To request the next five real-time arrivals data, for all next live arrivals at this stop, complete the following steps:
-1. Open the Postman app, and let's create a collection to store the requests. Near the top of the Postman app, select **New**. In the **Create New** window, select **Collection**. Name the collection and select the **Create** button.
+1. Open the Postman app. Select **New** to create the request. In the **Create New** window, select **HTTP Request**. Enter a **Request name** for the request.
-2. To create the request, select **New** again. In the **Create New** window, select **Request**. Enter a **Request name** for the request. Select the collection you created in the previous step, as the location in which to save the request. Then, select **Save**.
-
- ![Create a request in Postman](./media/how-to-request-transit-data/postman-new.png)
-
-3. Select the **GET** HTTP method on the builder tab and enter the following URL to create a GET request. Replace `{subscription-key}`, with your Azure Maps primary key.
+2. Select the **GET** HTTP method on the builder tab and enter the following URL to create a GET request. Replace `{subscription-key}`, with your Azure Maps primary key.
```HTTP https://atlas.microsoft.com/mobility/realtime/arrivals/json?subscription-key={subscription-key}&api-version=1.0&metroId=522&query=5222060603&transitType=bus ```
-4. After a successful request, you'll receive the following response. Notice that parameter 'scheduleType' defines whether the estimated arrival time is based on real-time or static data.
+3. After a successful request, you'll receive the following response. Notice that parameter 'scheduleType' defines whether the estimated arrival time is based on real-time or static data.
```JSON {
azure-maps How To Request Transit Data https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-maps/how-to-request-transit-data.md
In order to request detail information about transit agencies and supported tran
Let's make a request to get the metro area ID for the Seattle-Tacoma metro area. To request the ID for a metro area, complete the following steps:
-1. Open the Postman app, and let's create a collection to store the requests. Near the top of the Postman app, select **New**. In the **Create New** window, select **Collection**. Name the collection and select the **Create** button.
-
-2. To create the request, select **New** again. In the **Create New** window, select **Request**. Enter a **Request name** for the request. Select the collection you created in the previous step as the location in which to save the request. Then, select **Save**.
+1. Open the Postman app. Select **New** to create the request. In the **Create New** window, select **HTTP Request**. Enter a **Request name** for the request.
- ![Create a request in Postman](./media/how-to-request-transit-data/postman-new.png)
-
-3. Select the **GET** HTTP method on the builder tab and enter the following URL to create a GET request. Replace `{subscription-key}`, with your Azure Maps primary key.
+2. Select the **GET** HTTP method on the builder tab and enter the following URL to create a GET request. Replace `{subscription-key}`, with your Azure Maps primary key.
```HTTP https://atlas.microsoft.com/mobility/metroArea/id/json?subscription-key={subscription-key}&api-version=1.0&query=47.63096,-122.126 ```
-4. After a successful request, you'll receive the following response:
+3. After a successful request, you'll receive the following response:
```JSON {
The Azure Maps [Get Nearby Transit](/rest/api/maps/mobility/getnearbytransitprev
To make a request to the [Get Nearby Transit](/rest/api/maps/mobility/getnearbytransitpreview), follow the steps below:
-1. In Postman, click **New Request** | **GET request** and name it **Get Nearby stops**.
+1. Open the Postman app. Select **New** to create the request. In the **Create New** window, select **HTTP Request**. Enter a **Request name** for the request.
2. On the Builder tab, select the **GET** HTTP method, enter the following request URL for your API endpoint and click **Send**.
To obtain the location coordinates of the Space Needle tower, we'll use the Azur
To make a request to the Fuzzy search service, follow the steps below:
-1. In Postman, click **New Request** | **GET request** and name it **Get location coordinates**.
+1. In the Postman app, select **New** to create the request. In the **Create New** window, select **HTTP Request**. Enter a **Request name** for the request.
2. On the Builder tab, select the **GET** HTTP method, enter the following request URL, and click **Send**.
To make a request to the Fuzzy search service, follow the steps below:
To make a route request, complete the steps below:
-1. In Postman, click **New Request** | **GET request** and name it **Get Route info**.
+1. In the Postman app, select **New** to create the request. In the **Create New** window, select **HTTP Request**. Enter a **Request name** for the request.
2. On the Builder tab, select the **GET** HTTP method, enter the following request URL for your API endpoint and click **Send**.
To make a route request, complete the steps below:
The Azure Maps [Get Transit Itinerary](/rest/api/maps/mobility/gettransititinerarypreview) service allows you to request data for a particular route using the route's **itinerary ID** returned by the [Get Transit Routes API](/rest/api/maps/mobility/gettransitroutepreview) service. To make a request, complete the steps below:
-1. In Postman, click **New Request** | **GET request** and name it **Get Transit info**.
+1. In the Postman app, select **New** to create the request. In the **Create New** window, select **HTTP Request**. Enter a **Request name** for the request.
2. On the Builder tab, select the **GET** HTTP method. Enter the following request URL for your API endpoint and click **Send**.
azure-maps How To Request Weather Data https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-maps/how-to-request-weather-data.md
The [Get Current Conditions API](/rest/api/maps/weather/getcurrentconditions) re
In this example, you'll use the [Get Current Conditions API](/rest/api/maps/weather/getcurrentconditions) to retrieve current weather conditions at coordinates located in Seattle, WA.
-1. Open the Postman app. Near the top of the Postman app, select **New**. In the **Create New** window, select **Collection**. Name the collection and select the **Create** button. You'll use this collection for the rest of the examples in this document.
+1. Open the Postman app. Select **New** to create the request. In the **Create New** window, select **HTTP Request**. Enter a **Request name** for the request.
-2. To create the request, select **New** again. In the **Create New** window, select **Request**. Enter a **Request name** for the request. Select the collection you created in the previous step, and then select **Save**.
-
-3. Select the **GET** HTTP method in the builder tab and enter the following URL. For this request, and other requests mentioned in this article, replace `{Azure-Maps-Primary-Subscription-key}` with your primary subscription key.
+2. Select the **GET** HTTP method in the builder tab and enter the following URL. For this request, and other requests mentioned in this article, replace `{Azure-Maps-Primary-Subscription-key}` with your primary subscription key.
```http https://atlas.microsoft.com/weather/currentConditions/json?api-version=1.0&query=47.60357,-122.32945&subscription-key={Azure-Maps-Primary-Subscription-key} ```
-4. Click the blue **Send** button. The response body contains current weather information.
+3. Click the blue **Send** button. The response body contains current weather information.
```json {
In this example, you'll use the [Get Severe Weather Alerts API](/rest/api/maps/w
>[!NOTE] >This example retrieves severe weather alerts at the time of this writing. It is likely that there are no longer any severe weather alerts at the requested location. To retrieve actual severe alert data when running this example, you'll need to retrieve data at a different coordinate location.
-1. Open the Postman app, click **New**, and select **Request**. Enter a **Request name** for the request. Select the collection you created in the previous section or created a new one, and then select **Save**.
+1. In the Postman app, select **New** to create the request. In the **Create New** window, select **HTTP Request**. Enter a **Request name** for the request.
2. Select the **GET** HTTP method in the builder tab and enter the following URL. For this request, and other requests mentioned in this article, replace `{Azure-Maps-Primary-Subscription-key}` with your primary subscription key.
The [Get Daily Forecast API](/rest/api/maps/weather/getdailyforecast) returns de
In this example, you'll use the [Get Daily Forecast API](/rest/api/maps/weather/getdailyforecast) to retrieve the five-day weather forecast for coordinates located in Seattle, WA.
-1. Open the Postman app, click **New**, and select **Request**. Enter a **Request name** for the request. Select the collection you created in the previous section or created a new one, and then select **Save**.
+1. In the Postman app, select **New** to create the request. In the **Create New** window, select **HTTP Request**. Enter a **Request name** for the request.
2. Select the **GET** HTTP method in the builder tab and enter the following URL. For this request, and other requests mentioned in this article, replace `{Azure-Maps-Primary-Subscription-key}` with your primary subscription key.
The [Get Hourly Forecast API](/rest/api/maps/weather/gethourlyforecast) returns
In this example, you'll use the [Get Hourly Forecast API](/rest/api/maps/weather/gethourlyforecast) to retrieve the hourly weather forecast for the next 12 hours at coordinates located in Seattle, WA.
-1. Open the Postman app, click **New**, and select **Request**. Enter a **Request name** for the request. Select the collection you created in the previous section or created a new one, and then select **Save**.
+1. In the Postman app, select **New** to create the request. In the **Create New** window, select **HTTP Request**. Enter a **Request name** for the request.
2. Select the **GET** HTTP method in the builder tab and enter the following URL. For this request, and other requests mentioned in this article, replace `{Azure-Maps-Primary-Subscription-key}` with your primary subscription key.
In this example, you'll use the [Get Hourly Forecast API](/rest/api/maps/weather
] } ```+ ## Request minute-by-minute weather forecast data The [Get Minute Forecast API](/rest/api/maps/weather/getminuteforecast) returns minute-by-minute forecasts for a given location for the next 120 minutes. Users can request weather forecasts in intervals of 1, 5 and 15 minutes. The response includes details such as the type of precipitation (including rain, snow, or a mixture of both), start time, and precipitation intensity value (dBZ). In this example, you'll use the [Get Minute Forecast API](/rest/api/maps/weather/getminuteforecast) to retrieve the minute-by-minute weather forecast at coordinates located in Seattle, WA. The weather forecast is given for the next 120 minutes. Our query requests that the forecast be given at 15-minute intervals, but you can adjust the parameter to be either 1 or 5 minutes.
-1. Open the Postman app, click **New**, and select **Request**. Enter a **Request name** for the request. Select the collection you created in the previous section or created a new one, and then select **Save**.
+1. In the Postman app, select **New** to create the request. In the **Create New** window, select **HTTP Request**. Enter a **Request name** for the request.
2. Select the **GET** HTTP method in the builder tab and enter the following URL. For this request, and other requests mentioned in this article, replace `{Azure-Maps-Primary-Subscription-key}` with your primary subscription key.
azure-maps How To Search For Address https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-maps/how-to-search-for-address.md
In this example, we'll use the Azure Maps [Get Search Address API](/rest/api/map
>[!TIP] >If you have a set of addresses to geocode, you can use the [Post Search Address Batch API](/rest/api/maps/search/postsearchaddressbatch) to send a batch of queries in a single API call.
-1. Open the Postman app. Near the top of the Postman app, select **New**. In the **Create New** window, select **Collection**. Name the collection and select the **Create** button. You'll use this collection for the rest of the examples in this document.
+1. In the Postman app, select **New** to create the request. In the **Create New** window, select **HTTP Request**. Enter a **Request name** for the request.
-2. To create the request, select **New** again. In the **Create New** window, select **Request**. Enter a **Request name** for the request. Select the collection you created in the previous step, and then select **Save**.
-
-3. Select the **GET** HTTP method in the builder tab and enter the following URL. In this request, we're searching for a specific address: `400 Braod St, Seattle, WA 98109`. For this request, and other requests mentioned in this article, replace `{Azure-Maps-Primary-Subscription-key}` with your primary subscription key.
+2. Select the **GET** HTTP method in the builder tab and enter the following URL. In this request, we're searching for a specific address: `400 Broad St, Seattle, WA 98109`. For this request, and other requests mentioned in this article, replace `{Azure-Maps-Primary-Subscription-key}` with your primary subscription key.
```http https://atlas.microsoft.com/search/address/json?&subscription-key={Azure-Maps-Primary-Subscription-key}&api-version=1.0&language=en-US&query=400 Broad St, Seattle, WA 98109 ```
-4. Click the blue **Send** button. The response body will contain data for a single location.
+3. Click the blue **Send** button. The response body will contain data for a single location.
-5. Now, we'll search an address that has more than one possible locations. In the **Params** section, change the `query` key to `400 Broad, Seattle`. Click the blue **Send** button.
+4. Now, we'll search for an address that has more than one possible location. In the **Params** section, change the `query` key to `400 Broad, Seattle`. Click the blue **Send** button.
:::image type="content" source="./media/how-to-search-for-address/search-address.png" alt-text="Search for address":::
-6. Next, try setting the `query` key to `400 Broa`.
+5. Next, try setting the `query` key to `400 Broa`.
-7. Click the **Send** button. You can now see that the response includes responses from multiple countries. To geobias results to the relevant area for your users, always add as many location details as possible to the request.
+6. Click the **Send** button. You can now see that the response includes results from multiple countries/regions. To geobias results to the relevant area for your users, always add as many location details as possible to the request.
## Using Fuzzy Search API
In this example, we'll use Fuzzy Search to search the entire world for `pizza`.
>[!IMPORTANT] >To geobias results to the relevant area for your users, always add as many location details as possible. To learn more, see [Best Practices for Search](how-to-use-best-practices-for-search.md#geobiased-search-results).
-1. Open the Postman app, click **New**, and select **Request**. Enter a **Request name** for the request. Select the collection you created in the previous section or created a new one, and then select **Save**.
+1. In the Postman app, select **New** to create the request. In the **Create New** window, select **HTTP Request**. Enter a **Request name** for the request.
2. Select the **GET** HTTP method in the builder tab and enter the following URL. For this request, and other requests mentioned in this article, replace `{Azure-Maps-Primary-Subscription-key}` with your primary subscription key.
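A representative Fuzzy Search request for this example might look like the following; it's shown for illustration only, and the article's own URL may include additional parameters:

```http
https://atlas.microsoft.com/search/fuzzy/json?api-version=1.0&query=pizza&subscription-key={Azure-Maps-Primary-Subscription-key}
```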
The Azure Maps [Get Search Address Reverse API](/rest/api/maps/search/getsearcha
In this example, we'll be making reverse searches using a few of the optional parameters that are available. For the full list of optional parameters, see [Reverse Search Parameters](/rest/api/maps/search/getsearchaddressreverse#uri-parameters).
-1. In the Postman app, click **New**, and select **Request**. Enter a **Request name** for the request. Select the collection you created in the first section or created a new one, and then select **Save**.
+1. In the Postman app, select **New** to create the request. In the **Create New** window, select **HTTP Request**. Enter a **Request name** for the request.
2. Select the **GET** HTTP method in the builder tab and enter the following URL. For this request, and other requests mentioned in this article, replace `{Azure-Maps-Primary-Subscription-key}` with your primary subscription key. The request should look like the following URL:
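For reference, a reverse-geocoding request follows this general pattern; the coordinates below are illustrative, and the optional parameters discussed in this section can be appended as additional query parameters:

```http
https://atlas.microsoft.com/search/address/reverse/json?api-version=1.0&query=47.591180,-122.332700&subscription-key={Azure-Maps-Primary-Subscription-key}
```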
In this example, we'll be making reverse searches using a few of the optional pa
In this example, we'll search for a cross street based on the coordinates of an address.
-1. In the Postman app, click **New**, and select **Request**. Enter a **Request name** for the request. Select the collection you created in the first section or created a new one, and then select **Save**.
+1. In the Postman app, select **New** to create the request. In the **Create New** window, select **HTTP Request**. Enter a **Request name** for the request.
2. Select the **GET** HTTP method in the builder tab and enter the following URL. For this request, and other requests mentioned in this article, replace `{Azure-Maps-Primary-Subscription-key}` with your primary subscription key. The request should look like the following URL:
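A cross street search uses the same coordinate-style query against the cross-street endpoint. A representative request, with illustrative coordinates, might look like this:

```http
https://atlas.microsoft.com/search/address/reverse/crossStreet/json?api-version=1.0&query=47.591180,-122.332700&subscription-key={Azure-Maps-Primary-Subscription-key}
```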
azure-maps How To Secure Daemon App https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-maps/how-to-secure-daemon-app.md
We'll use the [Postman](https://www.postman.com/) application to create the toke
1. In the Postman app, select **New**.
-2. In the **Create New** window, select **Collection**.
+2. In the **Create New** window, select **HTTP Request**.
-3. Select **New** again.
+3. Enter a **Request name** for the request, such as *POST Token Request*.
-4. In the **Create New** window, select **Request**.
+4. Select the **POST** HTTP method.
-5. Enter a **Request name** for the request, such as *POST Token Request*.
-
-6. Select the collection you previously created, and then select **Save**.
-
-7. Select the **POST** HTTP method.
-
-8. Enter the following URL to address bar (replace `<Tenant ID>` with the Directory (Tenant) ID, the `<Client ID>` with the Application (Client) ID), and `<Client Secret>` with your client secret:
+5. Enter the following URL in the address bar. Replace `<Tenant ID>` with the Directory (Tenant) ID, `<Client ID>` with the Application (Client) ID, and `<Client Secret>` with your client secret:
```http https://login.microsoftonline.com/<Tenant ID>/oauth2/v2.0/token?response_type=token&grant_type=client_credentials&client_id=<Client ID>&client_secret=<Client Secret>%3D&scope=api%3A%2F%2Fazmaps.fundamentals%2F.default ```
-9. Select **Send**
+6. Select **Send**.
-10. You should see the following JSON response:
+7. You should see the following JSON response:
```json {
azure-maps Indoor Map Dynamic Styling https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-maps/indoor-map-dynamic-styling.md
In the next section, we'll set the occupancy *state* of office `UNIT26` to `true
1. In the Postman app, select **New**.
-2. In the **Create New** window, select **Collection**.
+2. In the **Create New** window, select **HTTP Request**.
-3. Select **New** again.
+3. Enter a **Request name** for the request, such as *POST Data Upload*.
-4. In the **Create New** window, select **Request**.
-
-5. Enter a **Request name** for the request, such as *POST Data Upload*.
-
-6. Select the collection you previously created, and then select **Save**.
-
-7. Enter the following URL to the [Feature Update States API](/rest/api/maps/v2/feature-state/update-states) (replace `{Azure-Maps-Primary-Subscription-key}` with your primary subscription key and `statesetId` with the `statesetId`):
+4. Enter the following URL to the [Feature Update States API](/rest/api/maps/v2/feature-state/update-states) (replace `{Azure-Maps-Primary-Subscription-key}` with your primary subscription key and `{statesetId}` with your stateset ID):
```http https://us.atlas.microsoft.com/featurestatesets/{statesetId}/featureStates/UNIT26?api-version=2.0&subscription-key={Azure-Maps-Primary-Subscription-key} ```
-8. Select the **Headers** tab.
+5. Select the **Headers** tab.
-9. In the **KEY** field, select `Content-Type`. In the **VALUE** field, select `application/json`.
+6. In the **KEY** field, select `Content-Type`. In the **VALUE** field, select `application/json`.
:::image type="content" source="./media/indoor-map-dynamic-styling/stateset-header.png"alt-text="Header tab information for stateset creation.":::
-10. Select the **Body** tab.
+7. Select the **Body** tab.
-11. In the dropdown lists, select **raw** and **JSON**.
+8. In the dropdown lists, select **raw** and **JSON**.
-12. Copy the following JSON style, and then paste it in the **Body** window:
+9. Copy the following JSON style, and then paste it in the **Body** window:
```json {
In the next section, we'll set the occupancy *state* of office `UNIT26` to `true
>[!IMPORTANT] >The update will be saved only if the posted time stamp is after the time stamp used in previous feature state update requests for the same feature `ID`.
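For reference, a minimal body for this kind of feature state update has the following shape. The key name must match a style defined in your stateset, and the value and timestamp shown here are only illustrations:

```json
{
    "states": [
        {
            "keyName": "occupied",
            "value": true,
            "eventTimestamp": "2021-06-23T19:45:00Z"
        }
    ]
}
```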
-13. Change the URL you used in step 7 by replacing `UNIT26` with `UNIT27`:
+10. Change the URL you used in step 4 by replacing `UNIT26` with `UNIT27`:
```http https://us.atlas.microsoft.com/featurestatesets/{statesetId}/featureStates/UNIT27?api-version=2.0&subscription-key={Azure-Maps-Primary-Subscription-key} ```
-14. Copy the following JSON style, and then paste it in the **Body** window:
+11. Copy the following JSON style, and then paste it in the **Body** window:
``` json {
azure-maps Mobility Coverage https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-maps/mobility-coverage.md
# Azure Maps Mobility services (Preview) coverage > [!IMPORTANT]
-> Azure Maps Mobility services are currently in public preview.
-> This preview version is provided without a service level agreement, and it's not recommended for production workloads. Certain features might not be supported or might have constrained capabilities.
-> For more information, see [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/).
-
+> The Azure Maps Mobility Services Preview has been retired and will no longer be available and supported after October 5, 2021. All other Azure Maps APIs and Services are unaffected by this retirement announcement.
+> For details, see [Azure Maps Mobility Preview Retirement](https://azure.microsoft.com/updates/azure-maps-mobility-services-preview-retirement/).
The Azure Maps [Mobility services](/rest/api/maps/mobility) improves the development time for applications with public transit features, such as transit routing and search for nearby public transit stops. Users can retrieve detailed information about transit stops, lines, and schedules. The Mobility services also allow users to retrieve stop and line geometries, alerts for stops, lines, and service areas, and real-time public transit arrivals and service alerts. Additionally, the Mobility services provide routing capabilities with multimodal trip planning options. Multimodal trip planning incorporates walking, bicycling, and public transit options, all into one trip. Users can also access detailed multimodal step-by-step itineraries.
azure-maps Mobility Service Data Structure https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-maps/mobility-service-data-structure.md
# Data structures in Azure Maps Mobility services (Preview) > [!IMPORTANT]
-> Azure Maps Mobility services are currently in public preview.
-> This preview version is provided without a service level agreement, and it's not recommended for production workloads. Certain features might not be supported or might have constrained capabilities.
-> For more information, see [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/).
--
+> The Azure Maps Mobility Services Preview has been retired and will no longer be available and supported after October 5, 2021. All other Azure Maps APIs and Services are unaffected by this retirement announcement.
+> For details, see [Azure Maps Mobility Preview Retirement](https://azure.microsoft.com/updates/azure-maps-mobility-services-preview-retirement/).
This article introduces the concept of Metro Area in [Azure Maps Mobility services](/rest/api/maps/mobility). We discuss some of common fields that are returned when this service is queried for public transit stops and lines. We recommend reading this article before developing with the Mobility services APIs.
azure-maps Tutorial Creator Indoor Maps https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-maps/tutorial-creator-indoor-maps.md
To upload the Drawing package:
1. In the Postman app, select **New**.
-2. In the **Create New** window, select **Collection**.
+2. In the **Create New** window, select **HTTP Request**.
-3. Select **New** again.
+3. Enter a **Request name** for the request, such as *POST Data Upload*.
-4. In the **Create New** window, select **Request**.
+4. Select the **POST** HTTP method.
-5. Enter a **Request name** for the request, such as *POST Data Upload*.
-
-6. Select the collection you previously created, and then select **Save**.
-
-7. Select the **POST** HTTP method.
-
-8. Enter the following URL to the [Data Upload API](/rest/api/maps/data-v2/upload-preview):
+5. Enter the following URL to the [Data Upload API](/rest/api/maps/data-v2/upload-preview):
```http https://us.atlas.microsoft.com/mapData?api-version=2.0&dataFormat=dwgzippackage&subscription-key={Azure-Maps-Primary-Subscription-key}
To upload the Drawing package:
>[!Important] >For this request, and other requests mentioned in this article, replace `{Azure-Maps-Primary-Subscription-key}` with your primary subscription key.
-9. Select the **Headers** tab.
+6. Select the **Headers** tab.
-10. In the **KEY** field, select `Content-Type`.
+7. In the **KEY** field, select `Content-Type`.
-11. In the **VALUE** field, select `application/octet-stream`.
+8. In the **VALUE** field, select `application/octet-stream`.
:::image type="content" source="./media/tutorial-creator-indoor-maps/data-upload-header.png"alt-text="Header tab information for data upload.":::
-12. Select the **Body** tab.
+9. Select the **Body** tab.
-13. In the dropdown list, select **binary**.
+10. In the dropdown list, select **binary**.
-14. Select **Select File**, and then select a Drawing package.
+11. Select **Select File**, and then select a Drawing package.
:::image type="content" source="./media/tutorial-creator-indoor-maps/data-upload-body.png" alt-text="Select a Drawing package.":::
-15. Select **Send**.
+12. Select **Send**.
-16. In the response window, select the **Headers** tab.
+13. In the response window, select the **Headers** tab.
-17. Copy the value of the **Operation-Location** key, which is the `status URL`. We'll use the `status URL` to check the status of the Drawing package upload.
+14. Copy the value of the **Operation-Location** key, which is the `status URL`. We'll use the `status URL` to check the status of the Drawing package upload.
:::image type="content" source="./media/tutorial-creator-indoor-maps/data-upload-response-header.png" alt-text="Copy the status URL in the Location key.":::
To check the status of the drawing package and retrieve its unique ID (`udid`):
1. Select **New**.
-2. In the **Create New** window, select **Request**.
+2. In the **Create New** window, select **HTTP Request**.
3. Enter a **Request name** for the request, such as *GET Data Upload Status*.
To retrieve content metadata:
1. Select **New**.
-2. In the **Create New** window, select **Request**.
+2. In the **Create New** window, select **HTTP Request**.
3. Enter a **Request name** for the request, such as *GET Data Upload Status*.
To convert a Drawing package:
1. Select **New**.
-2. In the **Create New** window, select **Request**.
+2. In the **Create New** window, select **HTTP Request**.
3. Enter a **Request name** for the request, such as *POST Convert Drawing Package*.
To check the status of the conversion process and retrieve the `conversionId`:
1. Select **New**.
-2. In the **Create New** window, select **Request**.
+2. In the **Create New** window, select **HTTP Request**.
3. Enter a **Request name** for the request, such as *GET Conversion Status*.
To create a dataset:
1. Select **New**.
-2. In the **Create New** window, select **Request**.
+2. In the **Create New** window, select **HTTP Request**.
3. Enter a **Request name** for the request, such as *POST Dataset Create*.
To check the status of the dataset creation process and retrieve the `datasetId`
1. Select **New**.
-2. In the **Create New** window, select **Request**.
+2. In the **Create New** window, select **HTTP Request**.
3. Enter a **Request name** for the request, such as *GET Dataset Status*.
To create a tileset:
1. Select **New**.
-2. In the **Create New** window, select **Request**.
+2. In the **Create New** window, select **HTTP Request**.
3. Enter a **Request name** for the request, such as *POST Tileset Create*.
To check the status of the dataset creation process and retrieve the `tilesetId`
1. Select **New**.
-2. In the **Create New** window, select **Request**.
+2. In the **Create New** window, select **HTTP Request**.
3. Enter a **Request name** for the request, such as *GET Tileset Status*.
To query the all collections in your dataset:
1. Select **New**.
-2. In the **Create New** window, select **Request**.
+2. In the **Create New** window, select **HTTP Request**.
3. Enter a **Request name** for the request, such as *GET Dataset Collections*.
To query the unit collection in your dataset:
1. Select **New**.
-2. In the **Create New** window, select **Request**.
+2. In the **Create New** window, select **HTTP Request**.
3. Enter a **Request name** for the request, such as *GET Unit Collection*.
To create a stateset:
1. Select **New**.
-2. In the **Create New** window, select **Request**.
+2. In the **Create New** window, select **HTTP Request**.
3. Enter a **Request name** for the request, such as *POST Create Stateset*.
To update the `occupied` state of the unit with feature `id` "UNIT26":
1. Select **New**.
-2. In the **Create New** window, select **Request**.
+2. In the **Create New** window, select **HTTP Request**.
3. Enter a **Request name** for the request, such as *PUT Set Stateset*.
azure-maps Tutorial Geofence https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-maps/tutorial-geofence.md
For this tutorial, you upload geofencing GeoJSON data that contains a `FeatureCo
>[!TIP] >You can update your geofencing data at any time. For more information, see [Data Upload API](/rest/api/maps/data-v2/upload-preview).
-1. Open the Postman app. Near the top, select **New**. In the **Create New** window, select **Collection**. Name the collection and select **Create**.
+1. Open the Postman app. Near the top, select **New**. In the **Create New** window, select **HTTP Request**. Enter a **Request name** for the request.
-2. To create the request, select **New** again. In the **Create New** window, select **Request**. Enter a **Request name** for the request. Select the collection you created in the previous step, and then select **Save**.
-
-3. Select the **POST** HTTP method in the builder tab, and enter the following URL to upload the geofencing data to Azure Maps. For this request, and other requests mentioned in this article, replace `{Azure-Maps-Primary-Subscription-key}` with your primary subscription key.
+2. Select the **POST** HTTP method in the builder tab, and enter the following URL to upload the geofencing data to Azure Maps. For this request, and other requests mentioned in this article, replace `{Azure-Maps-Primary-Subscription-key}` with your primary subscription key.
```HTTP https://us.atlas.microsoft.com/mapData?subscription-key={Azure-Maps-Primary-Subscription-key}&api-version=2.0&dataFormat=geojson
For this tutorial, you upload geofencing GeoJSON data that contains a `FeatureCo
The `geojson` parameter in the URL path represents the data format of the data being uploaded.
-4. Select the **Body** tab. Select **raw**, and then **JSON** as the input format. Copy and paste the following GeoJSON data into the **Body** text area:
+3. Select the **Body** tab. Select **raw**, and then **JSON** as the input format. Copy and paste the following GeoJSON data into the **Body** text area:
```JSON {
For this tutorial, you upload geofencing GeoJSON data that contains a `FeatureCo
} ```
-5. Select **Send**, and wait for the request to process. When the request completes, go to the **Headers** tab of the response. Copy the value of the **Operation-Location** key, which is the `status URL`.
+4. Select **Send**, and wait for the request to process. When the request completes, go to the **Headers** tab of the response. Copy the value of the **Operation-Location** key, which is the `status URL`.
```http https://us.atlas.microsoft.com/mapData/operations/<operationId>?api-version=2.0 ```
-6. To check the status of the API call, create a **GET** HTTP request on the `status URL`. You'll need to append your primary subscription key to the URL for authentication. The **GET** request should like the following URL:
+5. To check the status of the API call, create a **GET** HTTP request on the `status URL`. You'll need to append your primary subscription key to the URL for authentication. The **GET** request should look like the following URL:
```HTTP https://us.atlas.microsoft.com/mapData/<operationId>?api-version=2.0&subscription-key={Subscription-key} ```
-7. When the request completes successfully, select the **Headers** tab in the response window. Copy the value of the **Resource-Location** key, which is the `resource location URL`. The `resource location URL` contains the unique identifier (`udid`) of the uploaded data. Save the `udid` to query the Get Geofence API in the last section of this tutorial. Optionally, you can use the `resource location URL` to retrieve metadata from this resource in the next step.
+6. When the request completes successfully, select the **Headers** tab in the response window. Copy the value of the **Resource-Location** key, which is the `resource location URL`. The `resource location URL` contains the unique identifier (`udid`) of the uploaded data. Save the `udid` to query the Get Geofence API in the last section of this tutorial. Optionally, you can use the `resource location URL` to retrieve metadata from this resource in the next step.
:::image type="content" source="./media/tutorial-geofence/resource-location-url.png" alt-text="Copy the resource location URL.":::
-8. To retrieve content metadata, create a **GET** HTTP request on the `resource location URL` that was retrieved in step 7. Make sure to append your primary subscription key to the URL for authentication. The **GET** request should like the following URL:
+7. To retrieve content metadata, create a **GET** HTTP request on the `resource location URL` that was retrieved in step 6. Make sure to append your primary subscription key to the URL for authentication. The **GET** request should look like the following URL:
```http https://us.atlas.microsoft.com/mapData/metadata/{udid}?api-version=2.0&subscription-key={Azure-Maps-Primary-Subscription-key} ```
-9. When the request completes successfully, select the **Headers** tab in the response window. The metadata should like the following JSON fragment:
+8. When the request completes successfully, select the **Headers** tab in the response window. The metadata should look like the following JSON fragment:
```json {
Each of the following sections makes API requests by using the five different lo
### Equipment location 1 (47.638237,-122.132483)
-1. Near the top of the Postman app, select **New**. In the **Create New** window, select **Request**. Enter a **Request name** for the request. Make it *Location 1*. Select the collection you created in the [Upload Geofencing GeoJSON data section](#upload-geofencing-geojson-data), and then select **Save**.
+1. Near the top of the Postman app, select **New**. In the **Create New** window, select **HTTP Request**. Enter a **Request name** for the request. Make it *Location 1*. Select the collection you created in the [Upload Geofencing GeoJSON data section](#upload-geofencing-geojson-data), and then select **Save**.
2. Select the **GET** HTTP method in the builder tab, and enter the following URL. Make sure to replace `{Azure-Maps-Primary-Subscription-key}` with your primary subscription key, and `{udid}` with the `udid` you saved in the [Upload Geofencing GeoJSON data section](#upload-geofencing-geojson-data).
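The full request follows the pattern below. The device ID, search buffer, and mode values are illustrative, and `{udid}` is the value you saved earlier:

```http
https://atlas.microsoft.com/spatial/geofence/json?subscription-key={Azure-Maps-Primary-Subscription-key}&api-version=1.0&deviceId=device_1&udid={udid}&lat=47.638237&lon=-122.132483&searchBuffer=5&isAsync=True&mode=EnterAndExit
```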
In the preceding GeoJSON response, the negative distance from the main site geof
### Location 2 (47.63800,-122.132531)
-1. Near the top of the Postman app, select **New**. In the **Create New** window, select **Request**. Enter a **Request name** for the request. Make it *Location 2*. Select the collection you created in the [Upload Geofencing GeoJSON data section](#upload-geofencing-geojson-data), and then select **Save**.
+1. Near the top of the Postman app, select **New**. In the **Create New** window, select **HTTP Request**. Enter a **Request name** for the request. Make it *Location 2*. Select the collection you created in the [Upload Geofencing GeoJSON data section](#upload-geofencing-geojson-data), and then select **Save**.
2. Select the **GET** HTTP method in the builder tab, and enter the following URL. Make sure to replace `{Azure-Maps-Primary-Subscription-key}` with your primary subscription key, and `{udid}` with the `udid` you saved in the [Upload Geofencing GeoJSON data section](#upload-geofencing-geojson-data).
In the preceding GeoJSON response, the equipment has remained in the main site g
### Location 3 (47.63810783315048,-122.13336020708084)
-1. Near the top of the Postman app, select **New**. In the **Create New** window, select **Request**. Enter a **Request name** for the request. Make it *Location 3*. Select the collection you created in the [Upload Geofencing GeoJSON data section](#upload-geofencing-geojson-data), and then select **Save**.
+1. Near the top of the Postman app, select **New**. In the **Create New** window, select **HTTP Request**. Enter a **Request name** for the request. Make it *Location 3*. Select the collection you created in the [Upload Geofencing GeoJSON data section](#upload-geofencing-geojson-data), and then select **Save**.
2. Select the **GET** HTTP method in the builder tab, and enter the following URL. Make sure to replace `{Azure-Maps-Primary-Subscription-key}` with your primary subscription key, and `{udid}` with the `udid` you saved in the [Upload Geofencing GeoJSON data section](#upload-geofencing-geojson-data).
In the preceding GeoJSON response, the equipment has remained in the main site g
### Location 4 (47.637988,-122.1338344)
-1. Near the top of the Postman app, select **New**. In the **Create New** window, select **Request**. Enter a **Request name** for the request. Make it *Location 4*. Select the collection you created in the [Upload Geofencing GeoJSON data section](#upload-geofencing-geojson-data), and then select **Save**.
+1. Near the top of the Postman app, select **New**. In the **Create New** window, select **HTTP Request**. Enter a **Request name** for the request. Make it *Location 4*. Select the collection you created in the [Upload Geofencing GeoJSON data section](#upload-geofencing-geojson-data), and then select **Save**.
2. Select the **GET** HTTP method in the builder tab, and enter the following URL. Make sure to replace `{Azure-Maps-Primary-Subscription-key}` with your primary subscription key, and `{udid}` with the `udid` you saved in the [Upload Geofencing GeoJSON data section](#upload-geofencing-geojson-data).
In the preceding GeoJSON response, the equipment has remained in the main site g
### Location 5 (47.63799, -122.134505)
-1. Near the top of the Postman app, select **New**. In the **Create New** window, select **Request**. Enter a **Request name** for the request. Make it *Location 5*. Select the collection you created in the [Upload Geofencing GeoJSON data section](#upload-geofencing-geojson-data), and then select **Save**.
+1. Near the top of the Postman app, select **New**. In the **Create New** window, select **HTTP Request**. Enter a **Request name** for the request. Make it *Location 5*. Select the collection you created in the [Upload Geofencing GeoJSON data section](#upload-geofencing-geojson-data), and then select **Save**.
2. Select the **GET** HTTP method in the builder tab, and enter the following URL. Make sure to replace `{Azure-Maps-Primary-Subscription-key}` with your primary subscription key, and `{udid}` with the `udid` you saved in the [Upload Geofencing GeoJSON data section](#upload-geofencing-geojson-data).
azure-maps Tutorial Iot Hub Maps https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-maps/tutorial-iot-hub-maps.md
Next, use the [Postman app](https://www.getpostman.com) to [upload the geofence]
Follow these steps to upload the geofence by using the Azure Maps Data Upload API:
-1. Open the Postman app, and select **New**. In the **Create New** window, select **Collection**. Name the collection and select **Create**.
+1. Open the Postman app and select **New**. In the **Create New** window, select **HTTP Request**, and enter a request name for the request.
-2. To create the request, select **New** again. In the **Create New** window, select **Request**, and enter a request name for the request. Select the collection you created in the previous step, and then select **Save**.
-
-3. Select the **POST** HTTP method in the builder tab, and enter the following URL to upload the geofence to the Data Upload API. Make sure to replace `{subscription-key}` with your primary subscription key.
+2. Select the **POST** HTTP method in the builder tab, and enter the following URL to upload the geofence to the Data Upload API. Make sure to replace `{subscription-key}` with your primary subscription key.
```HTTP https://us.atlas.microsoft.com/mapData?subscription-key={subscription-key}&api-version=2.0&dataFormat=geojson
Follow these steps to upload the geofence by using the Azure Maps Data Upload AP
In the URL path, the `geojson` value against the `dataFormat` parameter represents the format of the data being uploaded.
-4. Select **Body** > **raw** for the input format, and choose **JSON** from the drop-down list. [Open the JSON data file](https://raw.githubusercontent.com/Azure-Samples/iothub-to-azure-maps-geofencing/master/src/Data/geofence.json?token=AKD25BYJYKDJBJ55PT62N4C5LRNN4), and copy the JSON into the body section. Select **Send**.
+3. Select **Body** > **raw** for the input format, and choose **JSON** from the drop-down list. [Open the JSON data file](https://raw.githubusercontent.com/Azure-Samples/iothub-to-azure-maps-geofencing/master/src/Data/geofence.json?token=AKD25BYJYKDJBJ55PT62N4C5LRNN4), and copy the JSON into the body section.
-5. Select **Send** and wait for the request to process. After the request completes, go to the **Headers** tab of the response. Copy the value of the **Operation-Location** key, which is the `status URL`.
+4. Select **Send** and wait for the request to process. After the request completes, go to the **Headers** tab of the response. Copy the value of the **Operation-Location** key, which is the `status URL`.
```http https://us.atlas.microsoft.com/mapData/operations/<operationId>?api-version=2.0 ```
-6. To check the status of the API call, create a **GET** HTTP request on the `status URL`. You'll need to append your primary subscription key to the URL for authentication. The **GET** request should like the following URL:
+5. To check the status of the API call, create a **GET** HTTP request on the `status URL`. You'll need to append your primary subscription key to the URL for authentication. The **GET** request should look like the following URL:
```HTTP https://us.atlas.microsoft.com/mapData/<operationId>/status?api-version=2.0&subscription-key={subscription-key} ```
-7. When the request completes successfully, select the **Headers** tab in the response window. Copy the value of the **Resource-Location** key, which is the `resource location URL`. The `resource location URL` contains the unique identifier (`udid`) of the uploaded data. Copy the `udid` for later use in this tutorial.
+6. When the request completes successfully, select the **Headers** tab in the response window. Copy the value of the **Resource-Location** key, which is the `resource location URL`. The `resource location URL` contains the unique identifier (`udid`) of the uploaded data. Copy the `udid` for later use in this tutorial.
:::image type="content" source="./media/tutorial-iot-hub-maps/resource-location-url.png" alt-text="Copy the resource location URL.":::
azure-monitor Azure Monitor Agent Overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-monitor/agents/azure-monitor-agent-overview.md
Azure Monitor agent coexists with the [generally available agents for Azure Moni
- **Environment requirements.** Azure Monitor agent supports [these operating systems](./agents-overview.md#supported-operating-systems) today latest operating systems and future environment support such as new operating system versions and types of networking requirements will most likely be provided only in this new agent. You should assess whether your environment is supported by Azure Monitor agent. If not, then you may need to stay with the current agent. If Azure Monitor agent supports your current environment, then you should consider transitioning to it. - **Current and new feature requirements.** Azure Monitor agent introduces several new capabilities such as filtering, scoping, and multi-homing, but it isnΓÇÖt at parity yet with the current agents for other functionality such as custom log collection and integration with all solutions ([see solutions in preview](/azure/azure-monitor/faq#which-log-analytics-solutions-are-supported-on-the-new-azure-monitor-agent)). Most new capabilities in Azure Monitor will only be made available with Azure Monitor agent, so over time more functionality will only be available in the new agent. You should consider whether Azure Monitor agent has the features you require and if there are some features that you can temporarily do without to get other important features in the new agent. If Azure Monitor agent has all the core capabilities you require then consider transitioning to it. If there are critical features that you require then continue with the current agent until Azure Monitor agent reaches parity.-- **Tolerance for rework.** If you're setting up a new environment with resources such as deployment scripts and onboarding templates, asses the effort involved. If it will take a significant amount of work, then consider setting up your new environment with the new agent as it is now generally available. A deprecation date published for the Log Analytics agents in August, 2021. The current agents will be supported for several years once deprecation begins.
+- **Tolerance for rework.** If you're setting up a new environment with resources such as deployment scripts and onboarding templates, assess the effort involved. If it will take a significant amount of work, then consider setting up your new environment with the new agent as it is now generally available. A deprecation date for the Log Analytics agents was published in August 2021. The current agents will be supported for several years once deprecation begins.
azure-monitor Asp Net Trace Logs https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-monitor/app/asp-net-trace-logs.md
For example:
```csharp TelemetryConfiguration configuration = TelemetryConfiguration.CreateDefault(); var telemetryClient = new TelemetryClient(configuration);
-telemetry.TrackTrace("Slow response - database01");
+telemetryClient.TrackTrace("Slow response - database01");
``` An advantage of TrackTrace is that you can put relatively long data in the message. For example, you can encode POST data there.
azure-monitor Azure Ad Authentication https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-monitor/app/azure-ad-authentication.md
tracer = Tracer(
After the Azure AD authentication is enabled, you can choose to disable local authentication. This will allow you to ingest telemetry authenticated exclusively by Azure AD and impacts data access (for example, through API Keys).
-You can disable local authentication by using the Azure portal or programmatically.
+You can disable local authentication by using the Azure portal, Azure policy, or programmatically.
### Azure portal
You can disable local authentication by using the Azure portal or programmatical
:::image type="content" source="./media/azure-ad-authentication/overview.png" alt-text="Screenshot of overview tab with the disabled(click to change) highlighted.":::
+### Azure policy
+
+The Azure policy for 'DisableLocalAuth' denies users the ability to create a new Application Insights resource without setting this property to 'true'. The policy name is 'Application Insights components should block non-AAD auth ingestion'.
+
+To apply this policy to your subscription, [create a new policy assignment and assign the policy](../../governance/policy/assign-policy-portal.md).
+
+Below is the policy template definition:
+```JSON
+{
+ "properties": {
+ "displayName": "Application Insights components should block non-AAD auth ingestion",
+ "policyType": "BuiltIn",
+ "mode": "Indexed",
+ "description": "Improve Application Insights security by disabling log ingestion that are not AAD-based.",
+ "metadata": {
+ "version": "1.0.0",
+ "category": "Monitoring"
+ },
+ "parameters": {
+ "effect": {
+ "type": "String",
+ "metadata": {
+ "displayName": "Effect",
+ "description": "The effect determines what happens when the policy rule is evaluated to match"
+ },
+ "allowedValues": [
+ "audit",
+ "deny",
+ "disabled"
+ ],
+ "defaultValue": "audit"
+ }
+ },
+ "policyRule": {
+ "if": {
+ "allOf": [
+ {
+ "field": "type",
+ "equals": "Microsoft.Insights/components"
+ },
+ {
+ "field": "Microsoft.Insights/components/DisableLocalAuth",
+ "notEquals": "true"
+ }
+ ]
+ },
+ "then": {
+ "effect": "[parameters('effect')]"
+ }
+ }
+ }
+}
+```
+ ### Programmatic enablement Property `DisableLocalAuth` is used to disable any local authentication on your Application Insights resource. When set to `true`, this property enforces that Azure AD authentication must be used for all access.
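+
+For illustration, a minimal ARM template resource sketch that creates an Application Insights component with local authentication disabled might look like the following. The resource name, location, and API version shown are assumptions; verify them against the current `Microsoft.Insights/components` schema before use.
+
+```json
+{
+    "type": "Microsoft.Insights/components",
+    "apiVersion": "2020-02-02",
+    "name": "my-appinsights-component",
+    "location": "eastus",
+    "kind": "web",
+    "properties": {
+        "Application_Type": "web",
+        "DisableLocalAuth": true
+    }
+}
+```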
azure-monitor Metrics Custom Overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-monitor/essentials/metrics-custom-overview.md
Previously updated : 04/13/2021 Last updated : 06/01/2021 # Custom metrics in Azure Monitor (Preview)
Again, this limit is not for an individual metric. It's for the sum of all suc
If you have a variable in the name or a high cardinality dimension, the following can occur: - Metrics become unreliable due to throttling-- Metrics Explorer doesnΓÇÖt work
+- Metrics Explorer won't work
- Alerting and notifications become unpredictable-- Costs can increase unexpectedly - Microsoft is not charging while the custom metrics with dimensions are in public preview. However, once charges start in the future, you will incur unexpected charges. The plan is to charge for metrics consumption based on the number of time-series monitored and number of API calls made.
+- Costs can increase unexpectedly - Microsoft is not charging for custom metrics with dimensions while this feature is in Public Preview. However, once charges start in the future, you will incur unexpected charges. The plan is to charge for metrics consumption based on the number of time-series monitored and number of API calls made.
## Next steps Use custom metrics from different
azure-monitor Monitor Virtual Machine Alerts https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-monitor/vm/monitor-virtual-machine-alerts.md
+
+ Title: Monitor virtual machines with Azure Monitor - Alerts
+description: Describes how to create alerts from virtual machines and their guest workloads using Azure Monitor.
++++ Last updated : 06/21/2021+++
+# Monitoring virtual machines with Azure Monitor - Alerts
+This article is part of the [Monitoring virtual machines and their workloads in Azure Monitor scenario](monitor-virtual-machine.md). It provides guidance on creating alert rules for your virtual machines and their guest operating systems. [Alerts in Azure Monitor](../alerts/alerts-overview.md) proactively notify you of interesting data and patterns in your monitoring data. There are no preconfigured alert rules for virtual machines, but you can create your own based on data collected by VM insights.
+
+> [!NOTE]
+> The alerts described in this article do not include alerts created by [Azure Monitor for VM guest health](vminsights-health-overview.md) which is a feature currently in public preview. As this feature nears general availability, guidance for alerting will be consolidated.
+
+> [!IMPORTANT]
+> Most alert rules have a cost that's dependent on the type of rule, how many dimensions it includes, and how frequently it's run. Refer to **Alert rules** in [Azure Monitor pricing](https://azure.microsoft.com/pricing/details/monitor/) before you create any alert rules.
+++
+## Choosing the alert type
+The most common types of alert rules in Azure Monitor are [metric alerts](../alerts/alerts-metric.md) and [log query alerts](../alerts/alerts-log-query.md).
+The type of alert rule that you create for a particular scenario depends on where the data that you're alerting on is located. There may be cases where data for a particular alerting scenario is available in both Metrics and Logs, and you need to determine which rule type to use. You may also have flexibility in how you collect certain data and can let your choice of alert rule type drive your decision for the data collection method.
+
+It's typically the best strategy to use metric alerts instead of log alerts when possible since they're more responsive and stateful. This of course requires that the data you're alerting on is available in Metrics. VM insights currently sends all of its data to Logs, so you must install Azure Monitor agent to use metric alerts with data from the guest operating system. Use log query alerts with metric data when it's either not available in Metrics or you require additional logic beyond the relatively simple logic for a metric alert rule.
+
+### Metric alert rules
+[Metric alert rules](../alerts/alerts-metric.md) are useful for alerting when a particular metric exceeds a threshold. For example, when the CPU of a machine is running high. The target of a metric alert rule can be a specific machine, a resource group, or a subscription. This allows you to create a single rule that applies to a group of machines.
+
+Metric rules for virtual machines can use the following data:
+
+- Host metrics for Azure virtual machines which are collected automatically.
+- Metrics that are collected by Azure Monitor agent from the guest operating system.
++
+> [!NOTE]
+> When VM insights supports the Azure Monitor Agent which is currently in public preview, then it will send performance data from the guest operating system to Metrics so that you can use metric alerts.
+++
+### Log alerts
+[Log alerts](../alerts/alerts-log-query.md) can perform two different measurements of the result of a log query, each of which supports distinct scenarios for monitoring virtual machines.
+
+- [Metric measurement](../alerts/alerts-unified-log.md#calculation-of-measure-based-on-a-numeric-column-such-as-cpu-counter-value) creates a separate alert for each record in the query results that has a numeric value that exceeds a threshold defined in the alert rule. These are ideal for numeric data such as performance values, where you want an individual alert for each computer.
+- [Number of results](../alerts/alerts-unified-log.md#count-of-the-results-table-rows) creates a single alert when a query returns at least a specified number of records. These are ideal for non-numeric data such as Windows and Syslog events collected by the [Log Analytics agent](../agents/log-analytics-agent.md) or for analyzing performance trends across multiple computers. You may also choose this strategy if you want to minimize your number of alerts or possibly create an alert only when multiple machines have the same error condition. A sketch of an event-based query for this measure follows this list.
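+
+As a sketch, a number of results style query over event data might look like the following; the `Event` table and error-level filter are illustrative, and the record-count threshold itself is defined in the alert rule rather than in the query:
+
+```kusto
+Event
+| where EventLevelName == "Error"
+| where TimeGenerated > ago(15m)
+```
+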
+### Target resource and impacted resource
+
+> [!NOTE]
+> Resource-centric log alert rules, currently in public preview, will simplify log query alerts for virtual machines and replace the functionality currently provided by metric measurement queries. You can use the machine as a target for the rule, which will better identify it as the affected resource. You can also apply a single alert rule to all machines in a particular resource group or subscription. When resource-centric log query alerts become generally available, the guidance in this scenario will be updated.
+>
+Each alert in Azure Monitor has an **Affected resource** property which is defined by the target of the rule. For metric alert rules, the affected resource will be the computer which allows you to easily identify it in the standard alert view. Log query alerts will be associated with the workspace resource instead of the machine, even when you use a metric measurement alert that creates an alert for each computer. You need to view the details of the alert to view the computer that was affected.
+
+The computer name is stored in the **Impacted resource** property which you can view in the details of the alert. It's also displayed as a dimension in emails that are sent from the alert.
+++
+You may want to have a view that lists the alerts with the affected computer. You can do this with a custom workbook that uses a custom [Resource Graph](../../governance/resource-graph/overview.md) query to provide this view. Following is a query that can be used to display alerts. Use the data source **Azure Resource Graph** in the workbook.
+
+```kusto
+alertsmanagementresources
+| extend dimension = properties.context.context.condition.allOf
+| mv-expand dimension
+| extend dimension = dimension.dimensions
+| mv-expand dimension
+| extend Computer = dimension.value
+| extend AlertStatus = properties.essentials.alertState
+| summarize count() by Alert=name, tostring(AlertStatus), tostring(Computer)
+| project Alert, AlertStatus, Computer
+```
+## Common alert rules
+The following section lists common alert rules for virtual machines in Azure Monitor. Details for metric alerts and log metric measurement alerts are provided for each. See [Choosing the alert type](#choosing-the-alert-type) section above for guidance on which type of alert to use.
+
+If you're not familiar with the process for creating alert rules in Azure Monitor, see the following for guidance:
+
+- [Create, view, and manage metric alerts using Azure Monitor](../alerts/alerts-metric.md)
+- [Create, view, and manage log alerts using Azure Monitor](../alerts/alerts-log.md)
++
+### Machine unavailable
+The most basic requirement is to send an alert when a machine is unavailable. It could be stopped, the guest operating system could be hung, or the agent could be unresponsive. There are a variety of ways to configure this alerting, but the most common is to use the heartbeat sent from the Log Analytics agent.
+
+#### Log query alert rules
+Log query alerts use the [Heartbeat table](/azure/azure-monitor/reference/tables/heartbeat), which should have a heartbeat record every minute from each machine.
+
+**Separate alerts**
+Use a metric measurement rule with the following query.
+
+```kusto
+Heartbeat
+| summarize TimeGenerated=max(TimeGenerated) by Computer
+| extend Duration = datetime_diff('minute',now(),TimeGenerated)
+| summarize AggregatedValue = min(Duration) by Computer, bin(TimeGenerated, 5m)
+```
+
+**Single alert**
+Use a number of results alert with the following query.
+
+```kusto
+Heartbeat
+| summarize LastHeartbeat=max(TimeGenerated) by Computer
+| where LastHeartbeat < ago(5m)
+```
++
+#### Metric alert rules
+A metric called *Heartbeat* is included in each Log Analytics workspace. Each virtual machine connected to that workspace will send a heartbeat metric value each minute. Since the computer is a dimension on the metric, you can fire an alert when any computer fails to send a heartbeat. Set the **Aggregation type** to *Count* and the **Threshold** value to match the **Evaluation granularity**.
++
+### CPU Alerts
+#### Metric alert rules
+
+| Target | Metric |
+|:|:|
+| Host | Percentage CPU |
+| Windows guest | \Processor Information(_Total)\% Processor Time |
+| Linux guest | cpu/usage_active |
+
+#### Log alert rules
+
+**CPU utilization**
+
+```kusto
+InsightsMetrics
+| where Origin == "vm.azm.ms"
+| where Namespace == "Processor" and Name == "UtilizationPercentage"
+| summarize AggregatedValue = avg(Val) by bin(TimeGenerated, 15m), Computer
+```
+
+**CPU utilization for all compute resources in a subscription**
+
+```kusto
+ InsightsMetrics
+ | where Origin == "vm.azm.ms"
+ | where _ResourceId startswith "/subscriptions/xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx" and (_ResourceId contains "/providers/Microsoft.Compute/virtualMachines/" or _ResourceId contains "/providers/Microsoft.Compute/virtualMachineScaleSets/")
+ | where Namespace == "Processor" and Name == "UtilizationPercentage"<br>\| summarize AggregatedValue = avg(Val) by bin(TimeGenerated, 15m), _ResourceId
+```
+
+**CPU utilization for all compute resources in a resource group**
+
+```kusto
+InsightsMetrics
+| where Origin == "vm.azm.ms"
+| where _ResourceId startswith "/subscriptions/xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx/resourceGroups/my-resource-group/providers/Microsoft.Compute/virtualMachines/" or _ResourceId startswith "/subscriptions/xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx/resourceGroups/my-resource-group/providers/Microsoft.Compute/virtualMachineScaleSets/"
+| where Namespace == "Processor" and Name == "UtilizationPercentage"<br>\| summarize AggregatedValue = avg(Val) by bin(TimeGenerated, 15m), _ResourceId
+```
+
+### Memory alerts
+
+#### Metric alert rules
+
+| Target | Metric |
+|:|:|
+| Windows guest | \Memory\% Committed Bytes in Use<br>\Memory\Available Bytes |
+| Linux guest | mem/available<br>mem/available_percent |
+
+#### Log alert rules
+
+**Available Memory in MB**
++
+```kusto
+InsightsMetrics
+| where Origin == "vm.azm.ms"
+| where Namespace == "Memory" and Name == "AvailableMB"
+| summarize AggregatedValue = avg(Val) by bin(TimeGenerated, 15m), Computer
+```
++
+**Available Memory in percentage**
+
+```kusto
+InsightsMetrics
+| where Origin == "vm.azm.ms"
+| where Namespace == "Memory" and Name == "AvailableMB"
+| extend TotalMemory = toreal(todynamic(Tags)["vm.azm.ms/memorySizeMB"])
+| extend AvailableMemoryPercentage = (toreal(Val) / TotalMemory) * 100.0
+| summarize AggregatedValue = avg(AvailableMemoryPercentage) by bin(TimeGenerated, 15m), Computer
+```
++
+### Disk alerts
+
+#### Metric alert rules
+
+| Target | Metric |
+|:|:|
+| Windows guest | \Logical Disk\(_Total)\% Free Space<br>\Logical Disk\(_Total)\Free Megabytes |
+| Linux guest | disk/free<br>disk/free_percent |
+
+#### Log query alert rules
+
+**Logical disk used - all disks on each computer**
+
+```kusto
+InsightsMetrics
+| where Origin == "vm.azm.ms"
+| where Namespace == "LogicalDisk" and Name == "FreeSpacePercentage"
+| summarize AggregatedValue = avg(Val) by bin(TimeGenerated, 15m), Computer, _ResourceId
+```
++
+**Logical disk used - individual disks**
+
+```kusto
+InsightsMetrics
+| where Origin == "vm.azm.ms"
+| where Namespace == "LogicalDisk" and Name == "FreeSpacePercentage"
+| extend Disk=tostring(todynamic(Tags)["vm.azm.ms/mountId"])
+| summarize AggregatedValue = avg(Val) by bin(TimeGenerated, 15m), Computer, _ResourceId, Disk
+```
+
+**Logical disk IOPS**
+
+```kusto
+InsightsMetrics
+| where Origin == "vm.azm.ms"
+| where Namespace == "LogicalDisk" and Name == "TransfersPerSecond"
+| extend Disk=tostring(todynamic(Tags)["vm.azm.ms/mountId"])
+| summarize AggregatedValue = avg(Val) by bin(TimeGenerated, 15m), Computer, _ResourceId, Disk
+```
+
+**Logical disk data rate**
+
+```kusto
+InsightsMetrics
+| where Origin == "vm.azm.ms"
+| where Namespace == "LogicalDisk" and Name == "BytesPerSecond"
+| extend Disk=tostring(todynamic(Tags)["vm.azm.ms/mountId"])
+| summarize AggregatedValue = avg(Val) by bin(TimeGenerated, 15m), Computer, _ResourceId, Disk
+```
+
+### Network alerts
+
+#### Metric alert rules
+
+| Target | Metric |
+|:|:|
+| Windows guest | \Network Interface\Bytes Sent/sec<br>\Network Interface\Bytes Received/sec |
+| Linux guest | net/bytes_sent<br>net/bytes_recv |
+
+#### Log query alert rules
+
+**Network interfaces bytes received - all interfaces**
+
+```kusto
+InsightsMetrics
+| where Origin == "vm.azm.ms"
+| where Namespace == "Network" and Name == "ReadBytesPerSecond"
+| summarize AggregatedValue = avg(Val) by bin(TimeGenerated, 15m), Computer, _ResourceId
+```
+
+**Network interfaces bytes received - individual interfaces**
+
+```kusto
+InsightsMetrics
+| where Origin == "vm.azm.ms"
+| where Namespace == "Network" and Name == "ReadBytesPerSecond"
+| extend NetworkInterface=tostring(todynamic(Tags)["vm.azm.ms/networkDeviceId"])
+| summarize AggregatedValue = avg(Val) by bin(TimeGenerated, 15m), Computer, _ResourceId, NetworkInterface
+```
+
+**Network interfaces bytes sent - all interfaces**
+
+```kusto
+InsightsMetrics
+| where Origin == "vm.azm.ms"
+| where Namespace == "Network" and Name == "WriteBytesPerSecond"
+| summarize AggregatedValue = avg(Val) by bin(TimeGenerated, 15m), Computer, _ResourceId
+```
+
+**Network interfaces bytes sent - individual interfaces**
+
+```kusto
+InsightsMetrics
+| where Origin == "vm.azm.ms"
+| where Namespace == "Network" and Name == "WriteBytesPerSecond"
+| extend NetworkInterface=tostring(todynamic(Tags)["vm.azm.ms/networkDeviceId"])
+| summarize AggregatedValue = avg(Val) by bin(TimeGenerated, 15m), Computer, _ResourceId, NetworkInterface
+```
++++++
+## Comparison of log query alert measures
+To compare the behavior of the two log alert measures, here's a walkthrough of each to create an alert when the CPU of a virtual machine exceeds 80%. The data we need is in the [InsightsMetrics table](/azure/azure-monitor/reference/tables/insightsmetrics). Following is a simple query that returns the records that need to be evaluated for the alert. Each type of alert rule will use a variant of this query.
+
+```kusto
+InsightsMetrics
+| where Origin == "vm.azm.ms"
+| where Namespace == "Processor" and Name == "UtilizationPercentage"
+```
+
+### Metric measurement
+The **metric measurement** measure will create a separate alert for each record in a query that has a value that exceeds a threshold defined in the alert rule. These alert rules are ideal for virtual machine performance data since they create individual alerts for each computer. The log query for this measure needs to return a value for each machine. The threshold in the alert rule will determine if the value should fire an alert.
+
+> [!NOTE]
+> Resource-centric log alert rules, currently in public preview, will simplify log query alerts for virtual machines and replace the functionality currently provided by metric measurement queries. You can use the machine as a target for the rule, which will better identify it as the affected resource. You can also apply a single alert rule to all machines in a particular resource group or subscription. When resource-centric log query alerts become generally available, the guidance in this scenario will be updated.
+
+#### Query
+The query for rules using metric measurement must include a record for each machine with a numeric property called *AggregatedValue*. This is the value that's compared to the threshold in the alert rule. The query doesn't need to compare this value to a threshold since the threshold is defined in the alert rule.
+
+```kusto
+InsightsMetrics
+| where Origin == "vm.azm.ms"
+| where Namespace == "Processor" and Name == "UtilizationPercentage"
+| summarize AggregatedValue = avg(Val) by bin(TimeGenerated, 15m), Computer
+```
++
+#### Alert rule
+Select **Logs** from the Azure Monitor menu to open Log Analytics. Make sure that the correct workspace is selected for your scope. If not, click **Select scope** in the top left and select the correct workspace. Paste in the query that has the logic you want and click **Run** to verify that it returns the correct results.
++
+Click **New alert rule** to create a rule with the current query. The rule will use your workspace for the **Resource**.
+
+Click the **Condition** to view the configuration. The query is already filled in with a graphical view of the value returned from the query for each computer. You can select the computer in the **Pivoted on** dropdown.
+
+Scroll down to **Alert logic** and select **Metric measurement** for the **Based on** property. Since we want to alert when the utilization exceeds 80%, set the **Aggregate value** to *Greater than* and the **Threshold value** to *80*.
+
+Scroll down to **Alert logic** and select **Metric measurement** for the **Based on** property. Provide a **Threshold** value to compare to the value returned from the query. In this example, we'll use *80*. In **Trigger Alert Based On**, specify how many times the threshold must be exceeded before an alert is created. For example, you may not care if the processor exceeds a threshold once and then returns to normal, but you do care if it continues to exceed the threshold over multiple consecutive measurements. For this example, we'll set **Consecutive breaches** to *3*.
+
+Scroll down to **Evaluated based on**. **Period** specifies the time span for the query. Specify a value of **15** minutes, which means that the query will only use data collected in the last 15 minutes. **Frequency** specifies how often the query is run. A lower value will make the alert rule more responsive but also have a higher cost. Specify **15** to run the query every 15 minutes.
++
+### Number of results rule
+The **number of results** rule will create a single alert when a query returns at least a specified number of records. The log query in this type of alert rule will typically identify the alerting condition, while the threshold for the alert rule determines if a sufficient number of records are returned.
++
+#### Query
+In this example, the threshold for the CPU utilization is included in the query. The number of records returned from the query will be the number of machines exceeding that threshold. The threshold for the alert rule is the minimum number of machines required to fire the alert. If you want an alert when a single machine is in error, then the threshold for the alert rule will be zero.
+
+```kusto
+InsightsMetrics
+| where Origin == "vm.azm.ms"
+| where Namespace == "Processor" and Name == "UtilizationPercentage"
+| summarize AverageUtilization = avg(Val) by Computer
+| where AverageUtilization > 80
+```
++
+#### Alert rule
+Select **Logs** from the Azure Monitor menu to open Log Analytics. Make sure that the correct workspace is selected for your scope. If not, click **Select scope** in the top left and select the correct workspace. Paste in the query that has the logic you want and click **Run** to verify that it returns the correct results. You probably don't have a machine currently over the threshold, so temporarily change to a lower threshold to verify the results, and then set the appropriate threshold before creating the alert rule.
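+
+For example, you might temporarily lower the threshold in the query to confirm that it returns results; the following sketch uses an arbitrary value of 10% for this verification:
+
+```kusto
+InsightsMetrics
+| where Origin == "vm.azm.ms"
+| where Namespace == "Processor" and Name == "UtilizationPercentage"
+| summarize AverageUtilization = avg(Val) by Computer
+// Temporary low threshold for verification only; restore 80 before creating the alert rule.
+| where AverageUtilization > 10
+```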
+++
+Click **New alert rule** to create a rule with the current query. The rule will use your workspace for the **Resource**.
+
+Click the **Condition** to view the configuration. The query is already filled in with a graphical view of the number of records that would have been returned from that query over the past several minutes.
+
+Scroll down to **Alert logic** and select **Number of results** for the **Based on** property. For this example, we want an alert if any records are returned, which means that at least one virtual machine has a processor above 80%. Select *Greater than* for the **Operator** and *0* for the **Threshold value**.
+
+Scroll down to **Evaluated based on**. **Period** specifies the time span for the query. Specify a value of **15** minutes, which means that the query will only use data collected in the last 15 minutes. **Frequency** specifies how often the query is run. A lower value will make the alert rule more responsive but also have a higher cost. Specify **15** to run the query every 15 minutes.
++++++
+## Next steps
+
+* [Monitor workloads running on virtual machines.](monitor-virtual-machine-workloads.md)
+* [Analyze monitoring data collected for virtual machines.](monitor-virtual-machine-analyze.md)
azure-monitor Monitor Virtual Machine Analyze https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-monitor/vm/monitor-virtual-machine-analyze.md
+
+ Title: Monitor virtual machines with Azure Monitor - Analyze monitoring data
+description: Describes the different features of Azure Monitor that allow you to analyze the health and performance of your virtual machines.
++++ Last updated : 06/21/2021+++
+# Monitoring virtual machines with Azure Monitor - Analyze monitoring data
+This article is part of the [Monitoring virtual machines and their workloads in Azure Monitor scenario](monitor-virtual-machine.md). It describes how to analyze monitoring data for your virtual machines after you've completed their configuration.
+
+Once you've enabled VM insights on your virtual machines, data will be available for analysis. This article describes the different features of Azure Monitor that allow you to analyze the health and performance of your virtual machines. Several of these features provide a different experience depending on whether you're analyzing a single machine or multiple machines. Each experience is described here, along with any behavior that's unique to each view.
+
+> [!NOTE]
+> This article includes guidance on analyzing data that's collected by Azure Monitor and VM insights. For data that you configure to monitor workloads running on virtual machines, see [Monitor workloads](monitor-virtual-machine-workloads.md).
+++
+## Single machine experience
+Access the single machine analysis experience from the **Monitoring** section of the menu in the Azure portal for each Azure virtual machine and Azure Arc enabled server. These options either limit the data that you're viewing to that machine or at least set an initial filter for it. This allows you to focus on a particular machine, view its current performance and its trend over time, and identify any issues it may be experiencing.
++
+- **Overview page.** Click the **Monitoring** tab to display [platform metrics](../essentials/data-platform-metrics.md) for the virtual machine host. This gives you a quick view of the trend over different time periods for important metrics such as CPU, network, and disk. Since these are host metrics though, counters from the guest operating system such as memory aren't included. Click on a graph to work with this data in [metrics explorer](../essentials/metrics-getting-started.md) where you can perform different aggregations and add additional counters for analysis.
++
+- **Activity log.** [Activity log](../essentials/activity-log.md#view-the-activity-log) entries filtered for the current virtual machine. Use this to view the recent activity of the machine such as any configuration changes and when it's been stopped and started.
++
+- **Insights.** Open [VM insights](../vm/vminsights-overview.md) with the map for the current virtual machine selected. This shows you running processes on the machine, dependencies on other machines and external processes. See [Use the Map feature of VM insights to understand application components](vminsights-maps.md#view-a-map-from-a-vm) for details on using the map view for a single machine.
+
+ Click on the **Performance** tab to view trends of critical performance counters over different periods of time. When you open VM insights from the virtual machine menu, you also have a table with detailed metrics for each disk. See [How to chart performance with VM insights](vminsights-performance.md#view-performance-directly-from-an-azure-vm) for details on using the performance view for a single machine.
+
+- **Alerts.** View [alerts](../alerts/alerts-overview.md) for the current virtual machine. These are only alerts that use the machine as the target resource, so there may be other alerts associated with it. You may need to use the **Alerts** option in the Azure Monitor menu to view alerts for all resources. See [Monitoring virtual machines with Azure Monitor - Alerts](monitor-virtual-machine-alerts.md) for details.
+
+- **Metrics.** Open metrics explorer with the scope set to the machine. This is the same as selecting one of the performance charts from the **Overview** page except that the metric isn't already added.
+
+- **Diagnostic settings.** Enable and configure [diagnostics extension](../agents/diagnostics-extension-overview.md) for the current virtual machine. Note that this option is different than the **Diagnostic settings** option for other Azure resources. Only enable the diagnostic extension if you need to send data to Azure Event Hubs or Azure Storage.
++
+- **Advisor recommendations.** Recommendations for the current virtual machine from [Azure Advisor](../../advisor/index.yml).
+
+- **Logs.** Open [Log Analytics](../logs/log-analytics-overview.md) with the [scope](../logs/scope.md) set to the current virtual machine. This allows you to select from a variety of existing queries to drill into log and performance data for only this machine.
++
+- **Connection monitor.** Open [Network Watcher Connection Monitor](../../network-watcher/connection-monitor-overview.md) to monitor connections between the current virtual machine and other virtual machines.
++
+- **Workbooks.** Open the workbook gallery with the VM insights workbooks for single machines. See [VM insights workbooks](vminsights-workbooks.md#vm-insights-workbooks) for a list of the VM insights workbooks designed for individual machines.
+
+## Multiple machine experience
+Access the multiple machine analysis experience from the **Monitor** menu in the Azure portal. These options provide access to all data so that you can select the virtual machines that you're interested in comparing.
+++
+- **Activity log.** [Activity log](../essentials/activity-log.md#view-the-activity-log) entries filtered for all resources. Create a filter for a **Resource Type** of Virtual Machines or Virtual Machine Scale Sets to view events for all of your machines.
+
+- **Alerts.** View [alerts](../alerts/alerts-overview.md) for all resources. This includes alerts that are related to virtual machines but associated with the workspace. Create a filter for a **Resource Type** of Virtual Machines or Virtual Machine Scale Sets to view alerts for all of your machines.
++
+- **Metrics.** Open [metrics explorer](../essentials/metrics-getting-started.md) with no scope selected. This is particularly useful when you want to compare trends across multiple machines. Select a subscription or a resource group to quickly add a group of machines to analyze together.
+
+- **Logs.** Open [Log Analytics](../logs/log-analytics-overview.md) with the [scope](../logs/scope.md) set to the workspace. This allows you to select from a variety of existing queries to drill into log and performance data for all machines. Or create a custom query to perform additional analysis.
++
+- **Workbooks.** Open the workbook gallery with the VM insights workbooks for multiple machines. See [VM insights workbooks](vminsights-workbooks.md#vm-insights-workbooks) for a list of the VM insights workbooks designed for multiple machines.
+
+- **Virtual Machines.** Open [VM insights](../vm/vminsights-overview.md) with the **Get Started** tab open. This displays all machines in your Azure subscription, identifying which are being monitored. Use this view to onboard individual machines that aren't already being monitored.
+
+ Click on the **Performance** tab to compare trends of critical performance counters for multiple machines over different periods of time. Select all machines in a subscription or resource group to include in the view. See [How to chart performance with VM insights](vminsights-performance.md) for details on using the performance view for multiple machines.
+
+ Click on the Map tab to view running processes on machines, dependencies between machines and external processes. Select all machines in a subscription or resource group, or inspect the data for a single machine. See [Use the Map feature of VM insights to understand application components](vminsights-maps.md#view-a-map-from-azure-monitor) for details on using the map view for multiple machines.
+
+## Compare Metrics and Logs
+For many features of Azure Monitor, you don't need to understand the different types of data it uses and where it's stored. You can use VM insights, for example, without any understanding of what data is being used to populate the Performance view, Map view, and workbooks. You just focus on the logic that you're analyzing. As you dig deeper though, there are cases where you'll need to understand the difference between [Metrics](../essentials/data-platform-metrics.md) and [Logs](../logs/data-platform-logs.md). Different features of Azure Monitor use different kinds of data, and the type of alerting that you can use for a particular scenario depends on having that data available in a particular location.
++
+This can be confusing if you're new to Azure Monitor, but the following details should help you understand the differences between the types of data.
+
+- Any non-numeric data such as events is stored in Logs. Metrics can only include numeric data that's sampled at regular intervals.
+- Numeric data can be stored in both Metrics and Logs so it can be analyzed in different ways and support different types of alerts.
+- Performance data from the guest operating system will be sent to Logs by VM insights using the Log Analytics agent (see the sample query after this list).
+- Performance data from the guest operating system will be sent to Metrics by the Azure Monitor agent.
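+
+For example, the following query is a sketch that lists the guest performance counters that VM insights has collected into Logs (the InsightsMetrics table); the namespaces and counter names returned depend on your environment:
+
+```kusto
+InsightsMetrics
+| where Origin == "vm.azm.ms"
+| summarize Samples = count() by Namespace, Name
+| sort by Namespace asc, Name asc
+```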
+
+> [!NOTE]
+> The Azure Monitor agent can send data to both Metrics and Logs. In this scenario, it's only used for Metrics, since the Log Analytics agent sends data to Logs and is currently required for VM insights. When VM insights uses the Azure Monitor agent, this scenario will be updated to remove the Log Analytics agent.
+
+## Analyze data with VM insights
+VM insights includes multiple performance charts that help you quickly get a status of the operation of your monitored machines, their trending performance over time, and dependencies between machines and processes. It also offers a consolidated view of different aspects of any monitored machine such as its properties and events collected in the Log Analytics workspace.
+
+The **Get Started** tab displays all machines in your Azure subscription, identifying which are being monitored. Use this view to quickly identify the machines that aren't being monitored and to onboard them individually.
++
+The **Performance** view includes multiple charts with several key performance indicators (KPIs) to help you determine how well machines are performing. The charts show resource utilization over a period of time so you can identify bottlenecks and anomalies, or you can switch to a perspective that lists each machine to view resource utilization based on the selected metric. See [How to chart performance with VM insights](vminsights-performance.md) for details on using the performance view.
++
+Use the **Map** view to see running processes on machines and their dependencies on other machines and external processes. You can change the time window for the view to determine if these dependencies have changed from another time period. See [Use the Map feature of VM insights to understand application components](vminsights-maps.md) for details on using the map view.
++
+## Analyze metric data with metrics explorer
+Metrics explorer allows you to plot charts, visually correlate trends, and investigate spikes and dips in metric values. See [Getting started with Azure Metrics Explorer](../essentials/metrics-getting-started.md) for details on using this tool.
+
+There are three namespaces used by virtual machines:
+
+| Namespace | Description | Requirement |
+|:|:|:|
+| Virtual Machine Host | Host metrics automatically collected for all Azure virtual machines. Detailed list of metrics at [Microsoft.Compute/virtualMachines](../essentials/metrics-supported.md#microsoftcomputevirtualmachines). | Collected automatically with no configuration required. |
+| Guest (classic) | Limited set of guest operating system and application performance data. Available in metrics explorer but not other Azure Monitor features such as metric alerts. | [Diagnostic extension](../agents/diagnostics-extension-overview.md) installed. Data is read from Azure storage. |
+| Virtual Machine Guest | Guest operating system and application performance data available to all Azure Monitor features using metrics. | [Azure Monitor agent](../agents/azure-monitor-agent-overview.md) installed with a [Data Collection Rule](../agents/data-collection-rule-overview.md). |
++
+## Analyze log data with Log Analytics
+Log Analytics allows you to perform custom analysis of your log data. Use Log Analytics when you want to dig deeper into the data used to create the views in VM insights. You may want to analyze different logic and aggregations of that data, correlate security data collected by Azure Security Center and Azure Sentinel with your health and availability data, or work with data collected for your [workloads](monitor-virtual-machine-workloads.md).
++
+You don't necessarily need to understand how to write a log query to use Log Analytics. There are multiple prebuilt queries that you can select and either run without modification or use as a start to a custom query. Click **Queries** at the top of the Log Analytics screen and view queries with a **Resource type** of **Virtual machines** or **Virtual machine Scale Sets**. See [Using queries in Azure Monitor Log Analytics](../logs/queries.md) for information on using these queries and [Log Analytics tutorial](../logs/log-analytics-tutorial.md) for a complete tutorial on using Log Analytics to run queries and work with their results.
++
+When you launch Log Analytics from VM insights using the properties pane in either the **Performance** or **Map** view, it lists the tables that have data for the selected computer. Click on a table to open Log Analytics with a simple query that returns all records in that table for the selected computer. Work with these results or modify the query for more complex analysis. The [scope](../logs/scope.md) is set to the workspace, meaning that you have access to data for all computers using that workspace.
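+
+For example, a simple query scoped to one machine might resemble the following sketch. The computer name is a placeholder; replace it with one of your monitored machines:
+
+```kusto
+InsightsMetrics
+| where Computer == "my-vm"  // placeholder computer name
+| where Namespace == "Processor" and Name == "UtilizationPercentage"
+| summarize avg(Val) by bin(TimeGenerated, 5m)
+| render timechart
+```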
++
+## Visualize data with workbooks
+[Workbooks](../visualize/workbooks-overview.MD) provide interactive reports in the Azure portal, combining different kinds of data into a single view. Workbooks combine text, [log queries](/azure/data-explorer/kusto/query/), metrics, and parameters into rich interactive reports. Workbooks are editable by any other team members who have access to the same Azure resources.
+
+Workbooks are helpful for scenarios such as:
+
+* Exploring the usage of your virtual machine when you don't know the metrics of interest in advance: CPU utilization, disk space, memory, network dependencies, etc. Unlike other usage analytics tools, workbooks let you combine multiple kinds of visualizations and analyses, making them great for this kind of free-form exploration.
+* Explaining to your team how a recently provisioned VM is performing, by showing metrics for key counters and other log events.
+* Sharing the results of a resizing experiment of your VM with other members of your team. You can explain the goals for the experiment with text, then show each usage metric and analytics queries used to evaluate the experiment, along with clear call-outs for whether each metric was above- or below-target.
+* Reporting the impact of an outage on the usage of your VM, combining data, text explanation, and a discussion of next steps to prevent outages in the future.
+
+VM insights includes the following workbooks. You can use these workbooks or use them as a start to create custom workbooks to address your particular requirements.
+
+### Single virtual machine
+
+| Workbook | Description |
+|-|-|
+| Performance | Provides a customizable version of the Performance view that leverages all of the Log Analytics performance counters that you have enabled. |
+| Connections | Provides an in-depth view of the inbound and outbound connections from your VM. |
+
+### Multiple virtual machines
+
+| Workbook | Description |
+|-|-|
+| Performance | Provides a customizable version of the Top N List and Charts view in a single workbook that leverages all of the Log Analytics performance counters that you have enabled.|
+| Performance counters | A Top N chart view across a wide set of performance counters. |
+| Connections | Provides an in-depth view of the inbound and outbound connections from your monitored machines. |
+| Active Ports | Provides a list of the processes that have bound to the ports on the monitored machines and their activity in the chosen timeframe. |
+| Open Ports | Provides the number of ports open on your monitored machines and the details on those open ports. |
+| Failed Connections | Displays the count of failed connections on your monitored machines, the failure trend, and whether the percentage of failures is increasing over time. |
+| Security and Audit | An analysis of your TCP/IP traffic that reports on overall connections, malicious connections, and where the IP endpoints reside globally. To enable all features, you will need to enable Security Detection. |
+| TCP Traffic | A ranked report for your monitored machines and their sent, received, and total network traffic in a grid and displayed as a trend line. |
+| Traffic Comparison | Compare network traffic trends for a single machine or a group of machines. |
+| Log Analytics agent | Analyze the health of your agents including the number of agents connecting to a workspace, which are unhealthy, and the effect of the agent on the performance of the machine. This workbook isn't available from VM insights like the other workbooks. Go to **Workbooks** in the Azure Monitor menu and select **Public Templates**. |
+
+See [Create interactive reports VM insights with workbooks](vminsights-workbooks.md) for detailed instructions on creating your own custom workbooks.
++
+## Next steps
+
+* [Create alerts from collected data.](monitor-virtual-machine-alerts.md)
+* [Monitor workloads running on virtual machines.](monitor-virtual-machine-workloads.md)
azure-monitor Monitor Virtual Machine Configure https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-monitor/vm/monitor-virtual-machine-configure.md
+
+ Title: Monitor virtual machines with Azure Monitor - Configure monitoring
+description: Describes how to configure virtual machines for monitoring in Azure Monitor. Monitor virtual machines and their workloads with Azure Monitor scenario.
++++ Last updated : 06/21/2021+++
+# Monitor virtual machines with Azure Monitor - Configure monitoring
+This article is part of the [Monitoring virtual machines and their workloads in Azure Monitor scenario](monitor-virtual-machine.md). It describes how to configure monitoring of your Azure and hybrid virtual machines in Azure Monitor.
+
+These are the most common Azure Monitor features to monitor the virtual machine host and its guest operating system. Depending on your particular environment and business requirements, you may not want to implement all features enabled by this configuration. Each section will describe what features are enabled by that configuration and whether it will potentially result in additional cost. This will help you to assess whether to perform each step of the configuration. See [Azure Monitor pricing](https://azure.microsoft.com/pricing/details/monitor/) for detailed pricing information.
+
+A general description of each feature enabled by this configuration is provided in the [overview for the scenario](monitor-virtual-machine.md). That article also includes links to content providing a detailed description of each feature to further help you assess your requirements.
++
+> [!NOTE]
+> The features enabled by the configuration support monitoring workloads running on your virtual machine, but you'll typically require additional configuration depending on your particular workloads. See [Workload monitoring](monitor-virtual-machine-workloads.md) for details on this additional configuration.
+
+## Configuration overview
+The following table lists the steps that must be performed for this configuration. Each one links to the section with the detailed description of that configuration step.
+
+| Step | Description |
+|:|:|
+| [No configuration](#no-configuration) | Activity log and platform metrics for the Azure virtual machine hosts are automatically collected with no configuration. |
+| [Create and prepare Log Analytics workspace](#create-and-prepare-log-analytics-workspace) | Create a Log Analytics workspace and configure it for VM insights. Depending on your particular requirements, you may configure multiple workspaces. |
+| [Send Activity log to Log Analytics workspace](#send-activity-log-to-log-analytics-workspace) | Send the Activity log to the workspace to analyze it with other log data. |
+| [Prepare hybrid machines](#prepare-hybrid-machines) | Hybrid machines either need the Arc-enabled servers agent installed so they can be managed like Azure virtual machines or have their agents installed manually. |
+| [Enable VM insights on machines](#enable-vm-insights-on-machines) | Onboard machines to VM insights, which deploys required agents and begins collecting data from guest operating system. |
+| [Send guest performance data to Metrics](#send-guest-performance-data-to-metrics) |Install the Azure Monitor agent to send performance data to Azure Monitor Metrics. |
+++
+## No configuration
+Azure Monitor provides a basic level of monitoring for Azure virtual machines at no cost and with no configuration. Platform metrics for Azure virtual machines include important metrics such as CPU, network, and disk utilization and can be viewed on the [Overview page](monitor-virtual-machine-analyze.md#single-machine-experience) for the machine in the Azure portal. The Activity log is also collected automatically and includes the recent activity of the machine such as any configuration changes and when it's been stopped and started.
+
+## Create and prepare Log Analytics workspace
+You require at least one Log Analytics workspace to support VM insights and to collect telemetry from the Log Analytics agent. There is no cost for the workspace, but you do incur ingestion and retention costs when you collect data. See [Manage usage and costs with Azure Monitor Logs](../logs/manage-cost-storage.md) for details.
+
+Many environments will use a single workspace for all their virtual machines and other Azure resources they monitor. You can even share a workspace used by [Azure Security Center and Azure Sentinel](monitor-virtual-machine-security.md), although many customers choose to segregate their availability and performance telemetry from security data. If you're just getting started with Azure Monitor, then start with a single workspace and consider creating additional workspaces as your requirements evolve.
+
+See [Designing your Azure Monitor Logs deployment](../logs/design-logs-deployment.md) for complete details on logic that you should consider for designing a workspace configuration.
+
+### Multihoming agents
+Multihoming refers to a virtual machine that connects to multiple workspaces. There typically is little reason to multihome agents for Azure Monitor alone. Having an agent send data to multiple workspaces will most likely create duplicate data in each workspace, increasing your overall cost. You can combine data from multiple workspaces using [cross workspace queries](../logs/cross-workspace-query.md) and [workbooks](../visualizations/../visualize/workbooks-overview.md).
+
+One reason you may consider multihoming though is an environment with Azure Security Center or Azure Sentinel stored in a separate workspace than Azure Monitor. A machine being monitored by each service would need to send data to each workspace. The Windows agent supports this scenario since it can send to up to four workspaces. The Linux agent though can currently only send to a single workspace. If you want Azure Monitor and Azure Security Center or Azure Sentinel to monitor a common set of Linux machines, then the services would need to share the same workspace.
+
+Another reason you may multihome your agents is in a [hybrid monitoring model](/azure/cloud-adoption-framework/manage/monitor/cloud-models-monitor-overview#hybrid-cloud-monitoring) where you use Azure Monitor and Operations Manager together to monitor the same machines. The Log Analytics agent and the Microsoft Management Agent for Operations Manager are the same agent, just sometimes referred to with different names.
+
+### Workspace permissions
+The access mode of the workspace defines which users are able to access different sets of data. See [Manage access to log data and workspaces in Azure Monitor](../logs/manage-access.md) for details on defining your access mode and configuring permissions. If you're just getting started with Azure Monitor, then consider accepting the defaults when you create your workspace and configure its permissions later.
++
+### Prepare the workspace for VM insights
+You must prepare each workspace for VM insights before enabling monitoring for any virtual machines. This installs required solutions that support data collection from the Log Analytics agent. This configuration only needs to be completed once for each workspace. See [Enable VM insights overview](vminsights-enable-overview.md) for details on this configuration using the Azure portal in addition to other methods.
++
+## Send Activity log to Log Analytics workspace
+You can view the platform metrics and Activity log collected for each virtual machine host in the Azure portal. Send this data into the same Log Analytics workspace as VM insights to analyze it with the other monitoring data collected for the virtual machine. You may have already done this when configuring monitoring for other Azure resources since there is a single Activity log for all resources in an Azure subscription.
+
+There is no cost for ingestion or retention of Activity log data. See [Create diagnostic settings](../essentials/diagnostic-settings.md) for details on creating a diagnostic setting to send the Activity log to your Log Analytics workspace.
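+
+Once the diagnostic setting is in place, Activity log entries are available in the **AzureActivity** table of the workspace. The following query is a sketch that summarizes recent virtual machine operations; exact column names can vary with the table schema version:
+
+```kusto
+AzureActivity
+| where ResourceProvider == "Microsoft.Compute"
+| summarize count() by OperationNameValue, ActivityStatusValue
+| sort by count_ desc
+```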
+
+### Network requirements
+The Log Analytics agent for both Linux and Windows communicates outbound to the Azure Monitor service over TCP port 443. The Dependency agent uses the Log Analytics agent for all communication, so it doesn't require any additional ports. See [Network requirements](../agents/log-analytics-agent.md#network-requirements) for details on configuring your firewall and proxy.
++
+### Gateway
+The Log Analytics gateway allows you to channel communications from your on-premises machines through a single gateway. You can't use Azure Arc-enabled servers agent with the Log Analytics gateway though, so if your security policy requires a gateway, then you'll need to manually install the agents for your on-premises machines. See [Log Analytics gateway](../agents/gateway.md) for details on configuring and using the Log Analytics gateway.
+
+### Azure Private link
+Azure Private Link allows you to create a private endpoint for your Log Analytics workspace. Once configured, any connections to the workspace must be made through this private endpoint. Private link works using DNS overrides, so there's no configuration requirement on individual agents. See [Use Azure Private Link to securely connect networks to Azure Monitor](../logs/private-link-security.md) for details on Azure Private Link.
+
+## Prepare hybrid machines
+A hybrid machine is any machine not running in Azure. This is a virtual machine running in another cloud or with a hosting provider, or a virtual or physical machine running on-premises in your data center. Use [Azure Arc enabled servers](../../azure-arc/servers/overview.md) on hybrid machines so you can manage them similar to your Azure virtual machines. VM insights in Azure Monitor allows you to use the same process to enable monitoring for Azure Arc enabled servers as you do for Azure virtual machines. See [Plan and deploy Arc-enabled servers](../../azure-arc/servers/plan-at-scale-deployment.md) for a complete guide on preparing your hybrid machines for Azure. This includes enabling individual machines and using [Azure Policy](../../governance/policy/overview.md) to enable your entire hybrid environment at scale.
+
+There is no additional cost for Azure Arc-enabled servers, but there may be some cost for different options that you enable. See [Azure Arc pricing](https://azure.microsoft.com/pricing/details/azure-arc/) for details. There will be a cost for the data collected in the workspace once the hybrid machines are enabled for VM insights.
+
+### Machines that can't use Azure Arc-enabled servers
+If you have any hybrid machines that match the following criteria, they won't be able to use Azure Arc-enabled servers. You still can monitor these machines with Azure Monitor, but you need to manually install their agents. See [Enable VM insights for a hybrid virtual machine](vminsights-enable-hybrid.md) to manually install the Log Analytics agent and Dependency agent on those hybrid machines.
+
+- The operating system of the machine is not supported by Azure Arc-enabled servers agent. See [Supported operating systems](../../azure-arc/servers/agent-overview.md#prerequisites).
+- Your security policy does not allow machines to connect directly to Azure. The Log Analytics agent can use the [Log Analytics gateway](../agents/gateway.md) whether or not Azure Arc-enabled servers is installed. The Arc-enabled servers agent though must connect directly to Azure.
+
+> [!NOTE]
+> Private endpoint for Arc-enabled servers is currently in public preview. This allows your hybrid machines to securely connect to Azure using a private IP address from your VNet.
+
+## Enable VM insights on machines
+When you enable VM insights on a machine, it installs the Log Analytics agent and Dependency agent, connects it to a workspace, and starts collecting performance data. This allows you to start using performance views and workbooks to analyze trends for a variety of guest operating system metrics, enables the map feature of VM insights for analyzing running processes and dependencies between machines, and collects data required for you to create a variety of alert rules.
+
+You can enable VM insights on individual machines using the same methods for Azure virtual machines and Azure Arc-enabled servers. This includes onboarding individual machines with the Azure portal or Resource Manager templates, or enabling machines at scale using Azure Policy. There is no direct cost for VM insights, but there is a cost for the ingestion and retention of data collected in the Log Analytics workspace.
+
+See [Enable VM insights overview](vminsights-enable-overview.md) for different options to enable VM insights for your machines. See [Enable VM insights by using Azure Policy](vminsights-enable-policy.md) to create a policy that will automatically enable VM insights on any new machines as they're created.
++++
+## Send guest performance data to Metrics
+The [Azure Monitor agent](../agents/azure-monitor-agent-overview.md) will replace the Log Analytics agent when it fully supports Azure Monitor, Azure Security Center, and Azure Sentinel. Until that time, it can be installed with the Log Analytics agent to send performance data from the guest operating system of machines to Azure Monitor Metrics. This allows you to evaluate this data with metrics explorer and use metric alerts.
+
+Azure Monitor agent requires at least one Data Collection Rule (DCR) that defines which data it should collect and where it should send that data. A single DCR can be used by any machines in the same resource group.
+
+Create a single DCR for each resource group with machines to monitor using the following data source. Be careful to not send data to Logs since this would be redundant with the data already being collected by the Log Analytics agent.
+
+- Data source type: Performance Counters
+- Destination: Azure Monitor Metrics
+
+You can install Azure Monitor agent on individual machines using the same methods for Azure virtual machines and Azure Arc-enabled servers. This includes onboarding individual machines with the Azure portal or Resource Manager templates, or enabling machines at scale using Azure Policy. For hybrid machines that can't use Arc-enabled servers, you will need to install the agent manually.
+
+See [Create rule and association in Azure portal](../agents/data-collection-rule-azure-monitor-agent.md) to create a DCR and deploy the Azure Monitor agent to one or more agents using the Azure portal. Other installation methods are described at [Install the Azure Monitor agent](../agents/azure-monitor-agent-install.md). See [Deploy Azure Monitor at scale using Azure Policy](../deploy-scale.md#azure-monitor-agent) to create a policy that will automatically deploy the agent and DCR to any new machines as they're created.
++
+## Next steps
+
+* [Analyze monitoring data collected for virtual machines.](monitor-virtual-machine-analyze.md)
+* [Create alerts from collected data.](monitor-virtual-machine-alerts.md)
+* [Monitor workloads running on virtual machines.](monitor-virtual-machine-workloads.md)
azure-monitor Monitor Virtual Machine Security https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-monitor/vm/monitor-virtual-machine-security.md
+
+ Title: Monitor virtual machines with Azure Monitor - Security
+description: Describes services for monitoring security of virtual machines and how they relate to Azure Monitor.
++++ Last updated : 06/21/2021+++
+# Monitor virtual machines with Azure Monitor - Security monitoring
+This article is part of the [Monitoring virtual machines and their workloads in Azure Monitor scenario](monitor-virtual-machine.md). It describes the Azure services for monitoring security for your virtual machines and how they relate to Azure Monitor. Azure Monitor was designed to monitor the availability and performance of your virtual machines and other cloud resources. While the operational data stored in Azure Monitor may be useful for investigating security incidents, other services in Azure were designed to monitor security.
+
+> [!IMPORTANT]
+> The security services have their own cost independent of Azure Monitor. Before configuring these services, refer to their pricing information to determine your appropriate investment in their usage.
+
+## Azure services for security monitoring
+Azure Monitor focuses on operational data including Activity Logs, Metrics, and Log Analytics supported sources including Windows Events (excluding security events), performance counters, logs, and Syslog. Security monitoring in Azure is performed by Azure Security Center and Azure Sentinel. These services each have additional cost, so you should determine their value in your environment before implementing them.
+
+[Azure Security Center](../../security-center/security-center-introduction.md) collects information about Azure resources and hybrid servers. Though capable of collecting security events, Azure Security Center focuses on collecting inventory data, assessment scan results, and policy audits to highlight vulnerabilities and recommend corrective actions. Noteworthy features include an interactive Network Map, Just-in-Time VM Access, Adaptive Network hardening, and Adaptive Application Controls to block suspicious executables.
+
+[Defender for Servers](../../security-center/azure-defender.md) is the server assessment solution provided by Azure Security Center. Defender for Servers can send Windows Security Events to Log Analytics. Azure Security Center does not rely on Windows Security Events for alerting or analysis. Using this feature allows centralized archival of events for investigation or other purposes.
+
+[Azure Sentinel](../../sentinel/overview.md) is a security information and event management (SIEM) and security orchestration, automation, and response (SOAR) solution. Sentinel collects security data from a wide range of Microsoft and third-party sources to provide alerting, visualization, and automation. This solution focuses on consolidating as many security logs as possible, including Windows security events. Sentinel can also collect Windows security event logs and commonly shares a Log Analytics workspace with Azure Security Center. Security events can only be collected from Sentinel or Azure Security Center when they share the same workspace. Unlike Azure Security Center, security events are a key component of alerting and analysis in Azure Sentinel.
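+
+If one of these services is collecting Windows security events into the workspace, you can query them alongside your availability and performance data. The following sketch counts failed logon events (event ID 4625) by computer; it assumes the **SecurityEvent** table is populated in your workspace:
+
+```kusto
+SecurityEvent
+| where EventID == 4625  // failed logon attempts
+| summarize FailedLogons = count() by Computer, bin(TimeGenerated, 1h)
+| sort by FailedLogons desc
+```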
+
+[Defender for Endpoint](/microsoft-365/security/defender-endpoint/microsoft-defender-endpoint) is an enterprise endpoint security platform designed to help enterprise networks prevent, detect, investigate, and respond to advanced threats. It's designed with a primary focus on protecting Windows end-user devices. Defender for Endpoint monitors workstations, servers, tablets, and cellphones with a variety of operating systems for security issues and vulnerabilities. Defender for Endpoint is closely aligned with Microsoft Endpoint Manager to collect data and provide security assessments. Data collection is primarily based on ETW trace logs and is stored in an isolated workspace.
+
+## Integration with Azure Monitor
+The following table lists the integration points for Azure Monitor with the security services. All the services use the same Log Analytics agent, which reduces complexity since there are no additional components being deployed to your virtual machines. Azure Security Center and Azure Sentinel store their data in a Log Analytics workspace so that you can use log queries to correlate data collected by the different services. Or create a custom workbook that combines security data and availability and performance data in a single view.
+
+| Integration point | Azure Monitor | Azure Security Center | Azure Sentinel | Defender for Endpoint |
+|:|:|:|:|:|
+| Collects security events | | X | X | X |
+| Stores data in Log Analytics workspace | X | X | X | |
+| Uses Log Analytics agent | X | X | X | X |
++
+## Workspace design considerations
+As described in [Monitor virtual machines with Azure Monitor - Configure monitoring](monitor-virtual-machine-configure.md#create-and-prepare-log-analytics-workspace), Azure Monitor and the security services require a Log Analytics workspace. Depending on your particular requirements, you may choose to share a common workspace or separate your availability and performance data from your security data. See [Designing your Azure Monitor Logs deployment](../logs/design-logs-deployment.md) for complete details on logic that you should consider for designing a workspace configuration.
+## Agent deployment
+You can configure Azure Security Center to automatically deploy the Log Analytics agent to Azure virtual machines. While this may seem redundant with Azure Monitor deploying the same agent, you will most likely want to enable both since they'll each perform their own configuration. For example, if Azure Security Center attempts to provision a machine that's already being monitored by Azure Monitor, it will use the agent that's already installed and add the configuration for the Azure Security Center workspace.
++
+## Next steps
+
+* [Analyze monitoring data collected for virtual machines.](monitor-virtual-machine-analyze.md)
+* [Create alerts from collected data.](monitor-virtual-machine-alerts.md)
azure-monitor Monitor Virtual Machine Workloads https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-monitor/vm/monitor-virtual-machine-workloads.md
+
+ Title: Monitor virtual machines with Azure Monitor - Workloads
+description: Describes how to monitor the guest workloads of virtual machines in Azure Monitor.
++++ Last updated : 06/21/2021+++
+# Monitoring virtual machines with Azure Monitor - Workloads
+This article is part of the [Monitoring virtual machines and their workloads in Azure Monitor scenario](monitor-virtual-machine.md). It describes how to monitor workloads that are running on the guest operating systems of your virtual machines. This includes details on analyzing and alerting on different sources of data on your virtual machines.
+++
+## Configure additional data collection
+VM insights collects only performance data from the guest operating system of enabled machines. You can enable the collection of additional performance data, events, and other monitoring data from the agent by configuring the Log Analytics workspace. It only needs to be configured once, since any agent connecting to the workspace will automatically download the configuration and immediately start collecting the defined data.
+
+See [Agent data sources in Azure Monitor](../agents/agent-data-sources.md) for a list of the data sources available and details on configuring them.
+
+> [!NOTE]
+> You cannot selectively configure data collection for different machines. All machines connected to the workspace will use the configuration for that workspace.
+++
+> [!IMPORTANT]
+> Be careful to only collect the data that you require since there are costs associated with any data collected in your workspace. The data that you collect should only support particular analysis and alerting scenarios.
++
+## Convert management pack logic
+A significant number of customers implementing Azure Monitor currently monitor their virtual machine workloads using management packs in System Center Operations Manager. There are no migration tools to convert assets from Operations Manager to Azure Monitor since the platforms are fundamentally different. Your migration will instead constitute a standard Azure Monitor implementation while you continue to use Operations Manager. As you customize Azure Monitor to meet your requirements for different applications and components and as it gains more features, then you can start to retire different management packs and agents in Operations Manager.
+
+Rather than attempting to replicate the entire functionality of a management pack, analyze the critical monitoring provided by the management pack and whether you can replicate those monitoring requirements using the methods described in the previous sections. In many cases, you can configure data collection and alert rules in Azure Monitor that will replicate enough functionality that you can retire a particular management pack. Management packs can often include hundreds and even thousands of rules and monitors.
+
+In most scenarios Operations Manager combines data collection and alerting conditions in the same rule or monitor. In Azure Monitor, you must configure data collection and an alert rule for any alerting scenarios.
+
+One strategy is to focus on those monitors and rules that have triggered alerts in your environment. Refer to [existing reports available in Operations Manager](/system-center/scom/manage-reports-installed-during-setup) such as **Alerts** and **Most Common Alerts** which can help you identify alerts over time. You can also run the following query on the Operations Database to evaluate the most common recent alerts.
++
+```sql
+select AlertName, COUNT(AlertName) as 'Total Alerts' from
+Alert.vAlertResolutionState ars
+inner join Alert.vAlertDetail adt on ars.AlertGuid = adt.AlertGuid
+inner join Alert.vAlert alt on ars.AlertGuid = alt.AlertGuid
+group by AlertName
+order by 'Total Alerts' DESC
+```
+
+Evaluate the output to identify specific alerts for migration. Ignore any alerts that have been tuned out or known to be problematic. Review your management packs to identify any additional critical alerts of interest that have never fired.
+++
+## Windows or Syslog event
+This is a common monitoring scenario where the operating system and applications write to the Windows event log or Syslog. Create an alert as soon as a single event is found, or wait for a series of matching events within a particular time window.
+
+To collect these events, configure the Log Analytics workspace to collect [Windows events](../agents/data-sources-windows-events.md) or [Syslog events](../agents/data-sources-syslog.md). There is a cost for the ingestion and retention of this data in the workspace.
+
+Windows events are stored in the [Event](/azure/azure-monitor/reference/tables/event) table and Syslog events in the [Syslog](/azure/azure-monitor/reference/tables/syslog) table in the Log Analytics workspace.
++
+### Sample log queries
+
+**Count the number of events by computer, event log, and event type**
+
+```kusto
+Event
+| summarize count() by Computer, EventLog, EventLevelName
+| sort by Computer, EventLog, EventLevelName
+```
+
+**Count the number of events by computer, event log, and event ID**
+
+```kusto
+Event
+| summarize count() by Computer, EventLog, EventID
+| sort by Computer, EventLog, EventID
+```
++
+### Sample alert rules
+The following sample creates an alert when a specific Windows event is created. It uses a metric measurement alert rule to create a separate alert for each computer.
+
+**Alert on a specific Windows event**
+This example shows an event in the Application log. Specify a threshold of 0 and consecutive breaches greater than 0.
+
+```kusto
+Event
+| where EventLog == "Application"
+| where EventID == 123
+| summarize AggregatedValue = count() by Computer, bin(TimeGenerated, 15m)
+```
+
+**Alert on Syslog events with a particular severity**
+The example shows error authorization events. Specify a threshold of 0 and consecutive breaches greater than 0.
+
+```kusto
+Syslog
+| where Facility == "auth"
+| where SeverityLevel == "err"
+| summarize AggregatedValue = count() by Computer, bin(TimeGenerated, 15m)
+```
++
+## Custom performance counters
+You may need performance counters created by applications or the guest operating system that aren't collected by VM insights. Configure the Log Analytics workspace to collect this [performance data](../agents/data-sources-performance-counters.md). There is a cost for the ingestion and retention of this data in the workspace. Be careful to not collect performance data that's already being collected by VM insights.
+
+Performance data configured by the workspace is stored in the [Perf](/azure/azure-monitor/reference/tables/perf) table. This has a different structure than the [InsightsMetrics](/azure/azure-monitor/reference/tables/insightsmetrics) table used by VM insights.
+
+### Sample log queries
+
+See [Log queries with Performance records](../agents/data-sources-performance-counters.md#log-queries-with-performance-records) for examples of log queries using custom performance counters.
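+
+As a sketch, the following query charts the hourly average of a custom counter collected into the **Perf** table. The object and counter names are hypothetical; substitute the names that you configured:
+
+```kusto
+Perf
+| where ObjectName == "MyObject" and CounterName == "My Counter"  // hypothetical names
+| summarize avg(CounterValue) by Computer, bin(TimeGenerated, 1h)
+| render timechart
+```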
+
+### Sample alerts
+
+**Alert on maximum value of a counter**
+
+```kusto
+Perf
+| where CounterName == "My Counter"
+| summarize AggregatedValue = max(CounterValue) by Computer
+```
+
+**Alert on average value of a counter**
+
+```kusto
+Perf
+| where CounterName == "My Counter"
+| summarize AggregatedValue = avg(CounterValue) by Computer
+```
+
+## Text logs
+Some applications write events to a text log stored on the virtual machine. Define a [custom log](../agents/data-sources-custom-logs.md) in the Log Analytics workspace to collect these events. You define the location of the text log and its detailed configuration. There is a cost for the ingestion and retention of this data in the workspace.
+
+Events from the text log are stored in a table with a name similar to **MyTable_CL**. You define the name and structure of the log when you configure it.
+
+### Sample log queries
+The column names used here are for example only. You define the column names for your particular log when you configure it, so they will most likely be different.
+
+**Count the number of events by code**
+
+```kusto
+MyApp_CL
+| summarize count() by code
+```
+
+### Sample alert rule
+
+**Alert on any error event**
+
+```kusto
+MyApp_CL
+| where status == "Error"
+| summarize AggregatedValue = count() by Computer, bin(TimeGenerated, 15m)
+```
+## IIS logs
+IIS running on Windows machines writes logs to a text file. Configure the Log Analytics workspace to collect [IIS logs](../agents/data-sources-iis-logs.md). There is a cost for the ingestion and retention of this data in the workspace.
+
+Records from the IIS log are stored in the [W3CIISLog](/azure/azure-monitor/reference/tables/w3ciislog) table in the Log Analytics workspace.
+
+### Sample log queries
++
+**Count of IIS log entries by URL for the host www.contoso.com**
+
+```kusto
+W3CIISLog
+| where csHost=="www.contoso.com"
+| summarize count() by csUriStem
+```
+
+**Total bytes received by each IIS machine**
+
+```kusto
+W3CIISLog
+| summarize sum(csBytes) by Computer
+```
+
+### Sample alert rule
+
+**Alert on any record with a return status of 500**
+
+```kusto
+W3CIISLog
+| where scStatus==500
+| summarize AggregatedValue = count() by Computer, bin(TimeGenerated, 15m)
+```
+
+## Service or daemon
+To monitor the status of a Windows service or Linux daemon, enable the [Change Tracking and Inventory](../../automation/change-tracking/overview.md) solution in [Azure Automation](../../automation/automation-intro.md). Azure Monitor has no ability to monitor the status of a service or daemon directly. There are some possible methods, such as looking for events in the Windows event log, but they're unreliable. You could also look for the process associated with the service running on the machine in the [VMProcess](/azure/azure-monitor/reference/tables/vmprocess) table, but this is only updated every hour, which is typically not sufficient for alerting.
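+
+For example, the following sketch returns the most recent **VMProcess** record for a hypothetical process name, which shows the last time the process was reported on each machine:
+
+```kusto
+VMProcess
+| where ExecutableName == "w3wp"  // hypothetical process name
+| summarize arg_max(TimeGenerated, DisplayName) by Computer
+```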
+
+> [!NOTE]
+> The Change Tracking and Inventory solution is different from the [Change Analysis](vminsights-change-analysis.md) feature in VM insights. Change Analysis is in public preview and not yet included in this scenario.
+
+See [Enable Change Tracking and Inventory](../../automation/change-tracking/overview.md#enable-change-tracking-and-inventory) for different options to enable the Change Tracking solution on your virtual machines. This includes methods to configure virtual machines at scale. You will have to [create an Azure Automation account](../../automation/automation-quickstart-create-account.md) to support the solution.
+
+When you enable Change Tracking and Inventory, two new tables are created in your Log Analytics workspace. Use these tables for log query alert rules.
+
+| Table | Description |
+|:|:|
+| [ConfigurationChange](/azure/azure-monitor/reference/tables/configurationchange) | Changes to in-guest configuration data. |
+| [ConfigurationData](/azure/azure-monitor/reference/tables/configurationdata) | Last reported state for in-guest configuration data. |
++
+### Sample log queries
+
+**List all services and daemons that have recently started**
+
+```kusto
+ConfigurationChange
+| where ConfigChangeType == "Daemons" or ConfigChangeType == "WindowsServices"
+| where SvcState == "Running"
+| sort by Computer, SvcName
+```
++
+### Alert rule samples
++
+**Alert when a specific service stops**
++
+```kusto
+ConfigurationData
+| where SvcName == "W3SVC"
+| where SvcState == "Stopped"
+| where ConfigDataType == "WindowsServices"
+| where SvcStartupType == "Auto"
+| summarize AggregatedValue = count() by Computer, SvcName, SvcDisplayName, SvcState, bin(TimeGenerated, 15m)
+```
+
+**Alert when one of a set of services stops**
++
+```kusto
+let services = dynamic(["omskd","cshost","schedule","wuauserv","heathservice","efs","wsusservice","SrmSvc","CertSvc","wmsvc","vpxd","winmgmt","netman","smsexec","w3svc","sms_site_vss_writer","ccmexe","spooler","eventsystem","netlogon","kdc","ntds","lsmserv","gpsvc","dns","dfsr","dfs","dhcp","DNSCache","dmserver","messenger","w32time","plugplay","rpcss","lanmanserver","lmhosts","eventlog","lanmanworkstation","wnirm","mpssvc","dhcpserver","VSS","ClusSvc","MSExchangeTransport","MSExchangeIS"]);
+ConfigurationData
+| where ConfigDataType == "WindowsServices"
+| where SvcStartupType == "Auto"
+| where SvcName in (services)
+| where SvcState == "Stopped"
+| project TimeGenerated, Computer, SvcName, SvcDisplayName, SvcState
+| summarize AggregatedValue = count() by Computer, SvcName, SvcDisplayName, SvcState, bin(TimeGenerated, 15m)
+```
+++
+## Port monitoring
+Port monitoring verifies that a machine is listening on a particular port. There are two potential strategies for port monitoring described below.
+
+### Dependency agent tables
+Use the [VMConnection](/azure/azure-monitor/reference/tables/vmconnection) and [VMBoundPort](/azure/azure-monitor/reference/tables/vmboundport) tables to analyze ports and connections on the machine. The VMBoundPort table is updated every minute with each process running on the computer and the port it's listening on. You could create a log query alert similar to the missing heartbeat alert to find processes that have stopped or to alert when the machine isn't listening on a particular port.
+
+### Sample log queries
+
+**Review the count of ports open on your VMs, which is useful when assessing VM configuration and security vulnerabilities.**
+
+```kusto
+VMBoundPort
+| where Ip != "127.0.0.1"
+| summarize by Computer, Machine, Port, Protocol
+| summarize OpenPorts=count() by Computer, Machine
+| order by OpenPorts desc
+```
+
+**List the bound ports on your VMs, which is useful when assessing VM configuration and security vulnerabilities.**
+
+```kusto
+VMBoundPort
+| distinct Computer, Port, ProcessName
+```
++
+**Analyze network activity by port to determine how your application or service is configured.**
+
+```kusto
+VMBoundPort
+| where Ip != "127.0.0.1"
+| summarize BytesSent=sum(BytesSent), BytesReceived=sum(BytesReceived), LinksEstablished=sum(LinksEstablished), LinksTerminated=sum(LinksTerminated), arg_max(TimeGenerated, LinksLive) by Machine, Computer, ProcessName, Ip, Port, IsWildcardBind
+| project-away TimeGenerated
+| order by Machine, Computer, Port, Ip, ProcessName
+```
+
+**Bytes sent and received trends for your VMs.**
+
+```kusto
+VMConnection
+| summarize sum(BytesSent), sum(BytesReceived) by bin(TimeGenerated,1hr), Computer
+| order by Computer desc
+| render timechart
+```
+
+**Connection failures over time, to determine if the failure rate is stable or changing.**
+
+```kusto
+VMConnection
+| where Computer == <replace this with a computer name, e.g. 'acme-demo'>
+| extend bythehour = datetime_part("hour", TimeGenerated)
+| project bythehour, LinksFailed
+| summarize failCount = sum(LinksFailed) by bythehour
+| sort by bythehour asc
+| render timechart
+```
+
+**Link status trends, to analyze the behavior and connection status of a machine.**
+
+```kusto
+VMConnection
+| where Computer == <replace this with a computer name, e.g. 'acme-demo'>
+| summarize dcount(LinksEstablished), dcount(LinksLive), dcount(LinksFailed), dcount(LinksTerminated) by bin(TimeGenerated, 1h)
+| render timechart
+```
+
+### Connection Monitor
+The [Connection Monitor](../../network-watcher/connection-monitor-overview.md) feature of [Network Watcher](../../network-watcher/network-watcher-monitoring-overview.md) can be used to test connections to a port on a virtual machine. This verifies that the machine is listening on the port and that it's accessible on the network.
+Connection Monitor requires the Network Watcher extension on the client machine initiating the test. It doesn't need to be installed on the machine being tested. See [Tutorial - Monitor network communication using the Azure portal](../../network-watcher/connection-monitor.md) for details.
+
+There is an additional cost for Connection Monitor. See [Network Watcher pricing](https://azure.microsoft.com/pricing/details/network-watcher/) for details.
+
+## Run a process on local machine
+Monitoring of some workloads requires a local process, for example, a PowerShell script that runs on the local machine to connect to an application and collect or process data. You can use [Hybrid Runbook Worker](../../automation/automation-hybrid-runbook-worker.md), which is part of [Azure Automation](../../automation/automation-intro.md), to run a local PowerShell script. There is no direct charge for the Hybrid Runbook Worker, but there is a cost for each runbook that it uses.
+
+The runbook can access any resources on the local machine to gather required data, but it can't send data directly to Azure Monitor or create an alert. To create an alert, have the runbook write an entry to a custom log and then configure that log to be collected by Azure Monitor. Create a log query alert rule that fires on that log entry.
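+
+As an illustration, if the runbook writes its output to a custom log that Azure Monitor collects into a hypothetical table named *MyRunbookResults_CL*, a number-of-results alert rule could use a query along these lines.
+
+```kusto
+// Minimal sketch: lists recent entries from the hypothetical custom log table
+// that contain an error marker written by the runbook. Custom log tables expose
+// each collected line in the RawData column.
+MyRunbookResults_CL
+| where TimeGenerated > ago(15m)
+| where RawData contains "ERROR"
+```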
+
+## Synthetic transactions
+A synthetic transaction connects to an application or service running on a machine, simulating a user connection or actual user traffic. If the application is available, then you can assume that the machine is running properly. [Application insights](../app/app-insights-overview.md) in Azure Monitor provides this functionality. This only works for applications that are accessible from the internet. For internal applications, you must open a firewall to allow access from specific Microsoft URLs performing the test, or use an alternate monitoring solution such as System Center Operations Manager.
+
+|Method | Description |
+|:|:|
+| [URL test](../app/monitor-web-app-availability.md) | Ensures that HTTP is available and returning a web page. |
+| [Multistep test](../app/availability-multistep.md) | Simulates a user session. |
++
+## SQL Server
+
+Use [SQL insights](../insights/sql-insights-overview.md) to monitor SQL Server running on your virtual machines.
+++
+## Next steps
+
+* [Learn how to analyze data in Azure Monitor logs using log queries.](../logs/get-started-queries.md)
+* [Learn about alerts using metrics and logs in Azure Monitor.](../alerts/alerts-overview.md)
azure-monitor Monitor Virtual Machine https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-monitor/vm/monitor-virtual-machine.md
+
+ Title: Monitor virtual machines with Azure Monitor
+description: Describes how to use Azure Monitor to monitor the health and performance of virtual machines and their workloads.
++++ Last updated : 06/02/2021+++
+# Monitoring virtual machines with Azure Monitor
+This scenario describes how to use Azure Monitor to monitor the health and performance of virtual machines and their workloads. It includes collecting telemetry that's critical for monitoring, analyzing and visualizing the collected data to identify trends, and configuring alerting to be proactively notified of critical issues.
+
+This article introduces the scenario and provides general concepts for monitoring virtual machines in Azure Monitor. If you want to jump right into a specific area, see the other articles that are part of this scenario, described in the following table.
+
+| Article | Description |
+|:|:|
+| [Enable monitoring](monitor-virtual-machine-configure.md) | Configuration of Azure Monitor required to monitor virtual machines. This includes enabling VM insights and enabling each virtual machine for monitoring. |
+| [Analyze](monitor-virtual-machine-analyze.md) | Analyze monitoring data collected by Azure Monitor from virtual machines and their guest operating systems and applications to identify trends and critical information. |
+| [Alerts](monitor-virtual-machine-alerts.md) | Create alerts to proactively identify critical issues in your monitoring data. |
+| [Monitor security](monitor-virtual-machine-security.md) | Describes Azure services for monitoring security of virtual machines. |
+| [Monitor workloads](monitor-virtual-machine-workloads.md) | Monitor applications and other workloads running on your virtual machines. |
+
+> [!IMPORTANT]
+> This scenario does not include features that are not generally available. This includes features in public preview such as [virtual machine guest health](vminsights-health-overview.md) that have the potential to significantly modify the recommendations made here. The scenario will be updated as preview features move into general availability.
++
+## Types of machines
+This scenario includes monitoring of the following types of machines using Azure Monitor. Many of the processes described here are the same regardless of the type of machine. Considerations for different types of machines are clearly identified where appropriate.
+
+- Azure virtual machines
+- Azure virtual machine scale sets
+- Hybrid machines, which are virtual machines running in other clouds, with a managed service provider, or on-premises. They also include physical machines running on-premises.
+
+## Layers of monitoring
+There are fundamentally four layers to a virtual machine that require monitoring. Each layer has a distinct set of telemetry and monitoring requirements.
++
+| Layer | Description |
+|:|:|
+| Virtual machine host | This is the host virtual machine in Azure. Azure Monitor has no access to the host in other clouds but must rely on information collected from the guest operating system. The host can be useful for tracking activity such as configuration changes, but typically isn't used for significant alerting. |
+| Guest operating system | Operating system running on the virtual machine which is some version of either Windows or Linux. A significant amount of monitoring data is available from the guest operating system such as performance data and events. VM insights in Azure Monitor provides a significant amount of logic for monitoring the health and performance of the guest operating system. |
+| Workloads | Workloads running in the guest operating system that support your business applications. Azure Monitor provides predefined monitoring for some workloads. For other workloads, you typically need to configure data collection and alerting using the monitoring data that they generate. |
+| Application | The business application that depends on your virtual machines can be monitored using [Application Insights](../app/app-insights-overview.md). |
+++
+## VM insights
+This scenario focuses on [VM insights](../vm/vminsights-overview.md), the primary feature in Azure Monitor for monitoring virtual machines. VM insights provides the following features.
+
+- Simplified onboarding of agents to enable monitoring of a virtual machine guest operating system and workloads.
+- Pre-defined trending performance charts and workbooks that allow you to analyze core performance metrics from the virtual machine's guest operating system.
+- Dependency map that displays processes running on each virtual machine and the interconnected components with other machines and external sources.
++
+## Agents
+Any monitoring tool such as Azure Monitor requires an agent installed on a machine to collect data from its guest operating system. Azure Monitor currently has multiple agents that collect different data, send data to different locations, and support different features. VM insights manages the deployment and configuration of the agents that most customers use, but you should be aware of the agents described below in case you require the particular scenarios that they support. See [Overview of Azure Monitor agents](../agents/agents-overview.md) for a detailed description and comparison of the different agents.
+
+> [!NOTE]
+> When the Azure Monitor agent fully supports VM insights, Azure Security Center, and Azure Sentinel, it will completely replace the Log Analytics agent, diagnostics extension, and Telegraf agent.
+
+- [Azure Monitor agent](../agents/agents-overview.md#azure-monitor-agent) - Supports virtual machines in Azure, other cloud environments, and on-premises. Sends data to Azure Monitor Metrics and Logs. When it fully supports VM insights, Azure Security Center, and Azure Sentinel, it will completely replace the Log Analytics agent and diagnostics extension.
+- [Log Analytics agent](../agents/agents-overview.md#log-analytics-agent) - Supports virtual machines in Azure, other cloud environments, and on-premises. Sends data to Azure Monitor Logs. Supports VM insights and monitoring solutions. This is the same agent used for System Center Operations Manager.
+- [Dependency agent](../agents/agents-overview.md#dependency-agent) - Collects data about the processes running on the virtual machine and their dependencies. Relies on the Log Analytics agent to transmit data into Azure and supports VM insights, Service Map, and Wire Data 2.0 solutions.
+- [Azure Diagnostics extension](../agents/agents-overview.md#azure-diagnostics-extension) - Available for Azure virtual machines only. Can send data to Azure Event Hubs and Azure Storage.
++++
+## Next steps
+
+* [Analyze monitoring data collected for virtual machines.](monitor-virtual-machine-analyze.md)
azure-monitor Vminsights Alerts https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-monitor/vm/vminsights-alerts.md
- Title: Alerts from VM insights
-description: Describes how to create alert rules from performance data collected by VM insights.
--- Previously updated : 11/10/2020---
-# How to create alerts from VM insights
-[Alerts in Azure Monitor](../alerts/alerts-overview.md) proactively notify you of interesting data and patterns in your monitoring data. VM insights does not include pre-configured alert rules, but you can create your own based on data that it collects. This article provides guidance on creating alert rules, including a set of sample queries.
-
-> [!IMPORTANT]
-> The alerts described in this article are based on log queries from data collected VM insights. This is different than the alerts created by [Azure Monitor for VM guest health](vminsights-health-overview.md) which is a feature currently in public preview. As this feature nears general availability, guidance for alerting will be consolidated.
--
-## Alert rule types
-Azure Monitor has [different types of alert rules](../alerts/alerts-overview.md#what-you-can-alert-on) based on the data being used to create the alert. All data collected by VM insights is stored in Azure Monitor Logs which supports [log alerts](../alerts/alerts-log.md). You cannot currently use [metric alerts](../alerts/alerts-log.md) with performance data collected from VM insights because the data is not collected into Azure Monitor Metrics. To collect data for metric alerts, install the [diagnostics extension](../agents/diagnostics-extension-overview.md) for Windows VMs or the [Telegraf agent](../essentials/collect-custom-metrics-linux-telegraf.md) for Linux VMs to collect performance data into Metrics.
-
-There are two types of log alerts in Azure Monitor:
--- [Number of results alerts](../alerts/alerts-unified-log.md#count-of-the-results-table-rows) create a single alert when a query returns at least a specified number of records. These are ideal for non-numeric data such and Windows and Syslog events collected by the [Log Analytics agent](../agents/log-analytics-agent.md) or for analyzing performance trends across multiple computers.-- [Metric measurement alerts](../alerts/alerts-unified-log.md#calculation-of-measure-based-on-a-numeric-column-such-as-cpu-counter-value) create a separate alert for each record in a query that has a value that exceeds a threshold defined in the alert rule. These alert rules are ideal for performance data collected by VM insights since they can create individual alerts for each computer.--
-## Alert rule walkthrough
-This section walks through the creation of a metric measurement alert rule using performance data from VM insights. You can use this basic process with a variety of log queries to alert on different performance counters.
-
-Start by creating a new alert rule following the procedure in [Create, view, and manage log alerts using Azure Monitor](../alerts/alerts-log.md). For the **Resource**, select the Log Analytics workspace that Azure Monitor VMs uses in your subscription. Since the target resource for log alert rules is always a Log Analytics workspace, the log query must include any filter for particular virtual machines or virtual machine scale sets.
-
-For the **Condition** of the alert rule, use one of the queries in the [section below](#sample-alert-queries) as the **Search query**. The query must return a numeric property called *AggregatedValue*. It should summarize the data by computer so that you can create a separate alert for each virtual machine that exceeds the threshold.
-
-In the **Alert logic**, select **Metric measurement** and then provide a **Threshold value**. In **Trigger Alert Based On**, specify how many times the threshold must be exceeded before an alert is created. For example, you probably don't care if the processor exceeds a threshold once and then returns to normal, but you do care if it continues to exceed the threshold over multiple consecutive measurements.
-
-The **Evaluated based on** section defines how often the query is run and the time window for the query. In the example shown below, the query will run every 15 minutes and evaluate performance values collected over the previous 15 minutes.
--
-![Metric measurement alert rule](media/vminsights-alerts/metric-measurement-alert.png)
-
-## Sample alert queries
-The following queries can be used with a metric measurement alert rule using performance data collected by VM insights. Each summarizes data by computer so that an alert is created for each computer with a value that exceeds the threshold.
-
-### CPU utilization
-
-```kusto
-InsightsMetrics
-| where Origin == "vm.azm.ms"
-| where Namespace == "Processor" and Name == "UtilizationPercentage"
-| summarize AggregatedValue = avg(Val) by bin(TimeGenerated, 15m), Computer, _ResourceId
-```
-
-### Available Memory in MB
-
-```kusto
-InsightsMetrics
-| where Origin == "vm.azm.ms"
-| where Namespace == "Memory" and Name == "AvailableMB"
-| summarize AggregatedValue = avg(Val) by bin(TimeGenerated, 15m), Computer, _ResourceId
-```
-
-### Available Memory in percentage
-
-```kusto
-InsightsMetrics
-| where Origin == "vm.azm.ms"
-| where Namespace == "Memory" and Name == "AvailableMB"
-| extend TotalMemory = toreal(todynamic(Tags)["vm.azm.ms/memorySizeMB"])
-| extend AvailableMemoryPercentage = (toreal(Val) / TotalMemory) * 100.0
-| summarize AggregatedValue = avg(AvailableMemoryPercentage) by bin(TimeGenerated, 15m), Computer, _ResourceId
-```
-
-### Logical disk used - all disks on each computer
-
-```kusto
-InsightsMetrics
-| where Origin == "vm.azm.ms"
-| where Namespace == "LogicalDisk" and Name == "FreeSpacePercentage"
-| summarize AggregatedValue = avg(Val) by bin(TimeGenerated, 15m), Computer, _ResourceId
-```
-
-### Logical disk used - individual disks
-
-```kusto
-InsightsMetrics
-| where Origin == "vm.azm.ms"
-| where Namespace == "LogicalDisk" and Name == "FreeSpacePercentage"
-| extend Disk=tostring(todynamic(Tags)["vm.azm.ms/mountId"])
-| summarize AggregatedValue = avg(Val) by bin(TimeGenerated, 15m), Computer, _ResourceId, Disk
-```
-
-### Logical disk IOPS
-
-```kusto
-InsightsMetrics
-| where Origin == "vm.azm.ms"
-| where Namespace == "LogicalDisk" and Name == "TransfersPerSecond"
-| extend Disk=tostring(todynamic(Tags)["vm.azm.ms/mountId"])
-| summarize AggregatedValue = avg(Val) by bin(TimeGenerated, 15m), Computer, _ResourceId, Disk
-```
-
-### Logical disk data rate
-
-```kusto
-InsightsMetrics
-| where Origin == "vm.azm.ms"
-| where Namespace == "LogicalDisk" and Name == "BytesPerSecond"
-| extend Disk=tostring(todynamic(Tags)["vm.azm.ms/mountId"])
-| summarize AggregatedValue = avg(Val) by bin(TimeGenerated, 15m), Computer, _ResourceId, Disk
-```
-
-### Network interfaces bytes received - all interfaces
-
-```kusto
-InsightsMetrics
-| where Origin == "vm.azm.ms"
-| where Namespace == "Network" and Name == "ReadBytesPerSecond"
-| summarize AggregatedValue = avg(Val) by bin(TimeGenerated, 15m), Computer, _ResourceId
-```
-
-### Network interfaces bytes received - individual interfaces
-
-```kusto
-InsightsMetrics
-| where Origin == "vm.azm.ms"
-| where Namespace == "Network" and Name == "ReadBytesPerSecond"
-| extend NetworkInterface=tostring(todynamic(Tags)["vm.azm.ms/networkDeviceId"])
-| summarize AggregatedValue = avg(Val) by bin(TimeGenerated, 15m), Computer, _ResourceId, NetworkInterface
-```
-
-### Network interfaces bytes sent - all interfaces
-
-```kusto
-InsightsMetrics
-| where Origin == "vm.azm.ms"
-| where Namespace == "Network" and Name == "WriteBytesPerSecond"
-| summarize AggregatedValue = avg(Val) by bin(TimeGenerated, 15m), Computer, _ResourceId
-```
-
-### Network interfaces bytes sent - individual interfaces
-
-```kusto
-InsightsMetrics
-| where Origin == "vm.azm.ms"
-| where Namespace == "Network" and Name == "WriteBytesPerSecond"
-| extend NetworkInterface=tostring(todynamic(Tags)["vm.azm.ms/networkDeviceId"])
-| summarize AggregatedValue = avg(Val) by bin(TimeGenerated, 15m), Computer, _ResourceId, NetworkInterface
-```
-
-### Virtual machine scale set
-Modify with your subscription ID, resource group, and virtual machine scale set name.
-
-```kusto
-InsightsMetrics
-| where Origin == "vm.azm.ms"
-| where _ResourceId startswith "/subscriptions/xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx/resourceGroups/my-resource-group/providers/Microsoft.Compute/virtualMachineScaleSets/my-vm-scaleset"
-| where Namespace == "Processor" and Name == "UtilizationPercentage"
-| summarize AggregatedValue = avg(Val) by bin(TimeGenerated, 15m), _ResourceId
-```
-
-### Specific virtual machine
-Modify with your subscription ID, resource group, and VM name.
-
-```kusto
-InsightsMetrics
-| where Origin == "vm.azm.ms"
-| where _ResourceId =~ "/subscriptions/xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx/resourceGroups/my-resource-group/providers/Microsoft.Compute/virtualMachines/my-vm"
-| where Namespace == "Processor" and Name == "UtilizationPercentage"
-| summarize AggregatedValue = avg(Val) by bin(TimeGenerated, 15m)
-```
-
-### CPU utilization for all compute resources in a subscription
-Modify with your subscription ID.
-
-```kusto
-InsightsMetrics
-| where Origin == "vm.azm.ms"
-| where _ResourceId startswith "/subscriptions/xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx" and (_ResourceId contains "/providers/Microsoft.Compute/virtualMachines/" or _ResourceId contains "/providers/Microsoft.Compute/virtualMachineScaleSets/")
-| where Namespace == "Processor" and Name == "UtilizationPercentage"
-| summarize AggregatedValue = avg(Val) by bin(TimeGenerated, 15m), _ResourceId
-```
-
-### CPU utilization for all compute resources in a resource group
-Modify with your subscription ID and resource group.
-
-```kusto
-InsightsMetrics
-| where Origin == "vm.azm.ms"
-| where _ResourceId startswith "/subscriptions/xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx/resourceGroups/my-resource-group/providers/Microsoft.Compute/virtualMachines/"
-or _ResourceId startswith "/subscriptions/xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx/resourceGroups/my-resource-group/providers/Microsoft.Compute/virtualMachineScaleSets/"
-| where Namespace == "Processor" and Name == "UtilizationPercentage"
-| summarize AggregatedValue = avg(Val) by bin(TimeGenerated, 15m), _ResourceId
-
-```
-
-## Next steps
--- Learn more about [alerts in Azure Monitor](../alerts/alerts-overview.md).-- Learn more about [log queries using data from VM insights](vminsights-log-search.md).
azure-monitor Vminsights Log Search https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-monitor/vm/vminsights-log-search.md
VM insights collects performance and connection metrics, computer and process in
## Map records
-One record is generated per hour for each unique computer and process, in addition to the records that are generated when a process or computer starts or is on-boarded to VM insights Map feature. These records have the properties in the following tables. The fields and values in the ServiceMapComputer_CL events map to fields of the Machine resource in the ServiceMap Azure Resource Manager API. The fields and values in the ServiceMapProcess_CL events map to the fields of the Process resource in the ServiceMap Azure Resource Manager API. The ResourceName_s field matches the name field in the corresponding Resource Manager resource.
+One record is generated per hour for each unique computer and process, in addition to the records that are generated when a process or computer starts or is added to VM insights. The fields and values in the ServiceMapComputer_CL events map to fields of the Machine resource in the ServiceMap Azure Resource Manager API. The fields and values in the ServiceMapProcess_CL events map to the fields of the Process resource in the ServiceMap Azure Resource Manager API. The ResourceName_s field matches the name field in the corresponding Resource Manager resource.
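+
+As a minimal sketch, a query such as the following returns the most recently reported map record for each machine resource, keyed on the ResourceName_s field described above.
+
+```kusto
+// Latest ServiceMapComputer_CL record per machine resource.
+ServiceMapComputer_CL
+| summarize arg_max(TimeGenerated, *) by ResourceName_s
+```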
There are internally generated properties you can use to identify unique processes and computers:
azure-monitor Vminsights Workbooks https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-monitor/vm/vminsights-workbooks.md
Workbooks are helpful for scenarios such as:
* Sharing the results of a resizing experiment of your VM with other members of your team. You can explain the goals for the experiment with text, then show each usage metric and analytics queries used to evaluate the experiment, along with clear call-outs for whether each metric was above- or below-target. * Reporting the impact of an outage on the usage of your VM, combining data, text explanation, and a discussion of next steps to prevent outages in the future.
-The following table summarizes the workbooks that VM insights includes to get you started.
-
-| Workbook | Description | Scope |
-|-|-|-|
-| Performance | Provides a customizable version of our Top N List and Charts view in a single workbook that leverages all of the Log Analytics performance counters that you have enabled.| Multiple VMs |
-| Performance counters | A Top N chart view across a wide set of performance counters. | Multiple VMs |
-| Connections | Connections provides an in-depth view of the inbound and outbound connections from your monitored VMs. | Multiple VMs |
-| Active Ports | Provides a list of the processes that have bound to the ports on the monitored VMs and their activity in the chosen timeframe. | Multiple VMs |
-| Open Ports | Provides the number of ports open on your monitored VMs and the details on those open ports. | Multiple VMs |
-| Failed Connections | Display the count of failed connections on your monitored VMs, the failure trend, and if the percentage of failures is increasing over time. | Multiple VMs |
-| Security and Audit | An analysis of your TCP/IP traffic that reports on overall connections, malicious connections, where the IP endpoints reside globally. To enable all features, you will need to enable Security Detection. | Multiple VMs |
-| TCP Traffic | A ranked report for your monitored VMs and their sent, received, and total network traffic in a grid and displayed as a trend line. | Multiple VMs |
-| Traffic Comparison | This workbooks lets you compare network traffic trends for a single machine or a group of machines. | Multiple VMs |
-| Performance | Provides a customizable version of our Performance view that leverages all of the Log Analytics performance counters that you have enabled. | Single VM |
-| Connections | Connections provides an in-depth view of the inbound and outbound connections from your VM. | Single VM |
-
+## VM insights workbooks
+VM insights includes the following workbooks. You can use these workbooks as they are, or use them as a starting point for creating custom workbooks that address your particular requirements.
+
+### Single virtual machine
+
+| Workbook | Description |
+|-|-|
+| Performance | Provides a customizable version of the Performance view that leverages all of the Log Analytics performance counters that you have enabled. |
+| Connections | Connections provides an in-depth view of the inbound and outbound connections from your VM. |
+
+### Multiple virtual machines
+
+| Workbook | Description |
+|-|-|
+| Performance | Provides a customizable version of the Top N List and Charts view in a single workbook that leverages all of the Log Analytics performance counters that you have enabled.|
+| Performance counters | A Top N chart view across a wide set of performance counters. |
+| Connections | Connections provides an in-depth view of the inbound and outbound connections from your monitored VMs. |
+| Active Ports | Provides a list of the processes that have bound to the ports on the monitored VMs and their activity in the chosen timeframe. |
+| Open Ports | Provides the number of ports open on your monitored VMs and the details on those open ports. |
+| Failed Connections | Displays the count of failed connections on your monitored VMs, the failure trend, and whether the percentage of failures is increasing over time. |
+| Security and Audit | An analysis of your TCP/IP traffic that reports on overall connections, malicious connections, and where the IP endpoints reside globally. To enable all features, you will need to enable Security Detection. |
+| TCP Traffic | A ranked report for your monitored VMs and their sent, received, and total network traffic in a grid and displayed as a trend line. |
+| Traffic Comparison | This workbook lets you compare network traffic trends for a single machine or a group of machines. |
+ ## Creating a new workbook A workbook is made up of sections consisting of independently editable charts, tables, text, and input controls. To better understand workbooks, let's start by opening a template and walk through creating a custom workbook.
azure-netapp-files Azure Netapp Files Quickstart Set Up Account Create Volumes https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-netapp-files/azure-netapp-files-quickstart-set-up-account-create-volumes.md
Use the Azure portal, PowerShell, or the Azure CLI to delete the resource group.
> [!div class="nextstepaction"] > [Create an NFS volume](azure-netapp-files-create-volumes.md)+
+> [!div class="nextstepaction"]
+> [Solution architectures using Azure NetApp Files](azure-netapp-files-solution-architectures.md)
azure-resource-manager Add Template To Azure Pipelines https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-resource-manager/bicep/add-template-to-azure-pipelines.md
Title: CI/CD with Azure Pipelines and Bicep files
-description: Describes how to configure continuous integration in Azure Pipelines by using Bicep files. It shows how to use a PowerShell script, or copy files to a staging location and deploy from there.
+description: Describes how to configure continuous integration in Azure Pipelines by using Bicep files. It shows how to use an Azure CLI task to deploy a Bicep file.
Previously updated : 06/01/2021 Last updated : 06/23/2021 # Integrate Bicep with Azure Pipelines
-You can integrate Bicep files with Azure Pipelines for continuous integration and continuous deployment (CI/CD). In this article, you learn how to build a Bicep file into an Azure Resource Manager template (ARM template) and then use two advanced ways to deploy templates with Azure Pipelines.
-
-## Select your option
-
-Before proceeding with this article, let's consider the different options for deploying an ARM template from a pipeline.
-
-* **Use Azure CLI task**. Use this task to run `az bicep build` to build your Bicep files before deploying the ARM templates.
-
-* **Use ARM template deployment task**. This option is the easiest option. This approach works when you want to deploy an ARM template directly from a repository. This option isn't covered in this article but instead is covered in the tutorial [Continuous integration of ARM templates with Azure Pipelines](../templates/deployment-tutorial-pipeline.md). It shows how to use the [ARM template deployment task](https://github.com/microsoft/azure-pipelines-tasks/blob/master/Tasks/AzureResourceManagerTemplateDeploymentV3/README.md) to deploy a template from your GitHub repository.
-
-* **Add task that runs an Azure PowerShell script**. This option has the advantage of providing consistency throughout the development life cycle because you can use the same script that you used when running local tests. Your script deploys the template but can also perform other operations such as getting values to use as parameters. This option is shown in this article. See [Azure PowerShell task](#azure-powershell-task).
-
- Visual Studio provides the [Azure Resource Group project](../templates/create-visual-studio-deployment-project.md) that includes a PowerShell script. The script stages artifacts from your project to a storage account that Resource Manager can access. Artifacts are items in your project such as linked templates, scripts, and application binaries. If you want to continue using the script from the project, use the PowerShell script task shown in this article.
-
-* **Add tasks to copy and deploy tasks**. This option offers a convenient alternative to the project script. You configure two tasks in the pipeline. One task stages the artifacts to an accessible location. The other task deploys the template from that location. This option is shown in this article. See [Copy and deploy tasks](#copy-and-deploy-tasks).
+You can integrate Bicep files with Azure Pipelines for continuous integration and continuous deployment (CI/CD). In this article, you learn how to use an Azure CLI pipeline task to deploy a Bicep file.
## Prepare your project
-This article assumes your ARM template and Azure DevOps organization are ready for creating the pipeline. The following steps show how to make sure you're ready:
+This article assumes your Bicep file and Azure DevOps organization are ready for creating the pipeline. The following steps show how to make sure you're ready:
* You have an Azure DevOps organization. If you don't have one, [create one for free](/azure/devops/pipelines/get-started/pipelines-sign-up). If your team already has an Azure DevOps organization, make sure you're an administrator of the Azure DevOps project that you want to use. * You've configured a [service connection](/azure/devops/pipelines/library/connect-to-azure) to your Azure subscription. The tasks in the pipeline execute under the identity of the service principal. For steps to create the connection, see [Create a DevOps project](../templates/deployment-tutorial-pipeline.md#create-a-devops-project).
-* You have an [ARM template](../templates/quickstart-create-templates-use-visual-studio-code.md) that defines the infrastructure for your project.
+* You have a [Bicep file](../templates/quickstart-create-bicep-use-visual-studio-code.md) that defines the infrastructure for your project.
## Create pipeline
You're ready to either add an Azure PowerShell task or the copy file and deploy
## Azure CLI task
-This section shows how to build a Bicep file into an ARM template before the template is deployed.
-
-The following YAML file builds a Bicep file by using an [Azure CLI task](/azure/devops/pipelines/tasks/deploy/azure-cli):
+The following YAML file creates a resource group and deploys a Bicep file by using an [Azure CLI task](/azure/devops/pipelines/tasks/deploy/azure-cli):
```yml trigger: - master
+name: Deploy Bicep files
+
+variables:
+ vmImageName: 'ubuntu-latest'
+
+ azureServiceConnection: '<your-connection-name>'
+ resourceGroupName: '<your-resource-group-name>'
+ location: '<your-resource-group-location>'
+ templateFile: './azuredeploy.bicep'
pool:
- vmImage: 'ubuntu-latest'
+ vmImage: $(vmImageName)
steps: - task: AzureCLI@2 inputs:
- azureSubscription: 'script-connection'
+ azureSubscription: $(azureServiceConnection)
scriptType: bash scriptLocation: inlineScript inlineScript: | az --version
- az bicep build --file ./azuredeploy.bicep
+ az group create --name $(resourceGroupName) --location $(location)
+ az deployment group create --resource-group $(resourceGroupName) --template-file $(templateFile)
```
-For `azureSubscription`, provide the name of the service connection you created.
-
-For `scriptType`, use **bash**.
-
-For `scriptLocation`, use **inlineScript**, or **scriptPath**. If you specify **scriptPath**, you will also need to specify a `scriptPath` parameter.
-
-In `inlineScript`, specify your script lines. The script provided in the sample builds a bicep file called *azuredeploy.bicep* and exists in the root of the repository.
-
-## Azure PowerShell task
-
-This section shows how to configure continuous deployment by using a single task that runs the PowerShell script in your project. If you need a PowerShell script that deploys a template, see [Deploy-AzTemplate.ps1](https://github.com/Azure/azure-quickstart-templates/blob/master/Deploy-AzTemplate.ps1) or [Deploy-AzureResourceGroup.ps1](https://github.com/Azure/azure-quickstart-templates/blob/master/Deploy-AzureResourceGroup.ps1).
-
-The following YAML file creates an [Azure PowerShell task](/azure/devops/pipelines/tasks/deploy/azure-powershell):
-
-```yml
-trigger:
-- master-
-pool:
- vmImage: 'ubuntu-latest'
-
-steps:
-- task: AzurePowerShell@5
- inputs:
- azureSubscription: 'script-connection'
- ScriptType: 'FilePath'
- ScriptPath: './Deploy-AzTemplate.ps1'
- ScriptArguments: -Location 'centralus' -ResourceGroupName 'demogroup' -TemplateFile templates\mainTemplate.json
- azurePowerShellVersion: 'LatestVersion'
-```
-
-When you set the task to `AzurePowerShell@5`, the pipeline uses the [Az module](/powershell/azure/new-azureps-module-az).
-
-```yaml
-steps:
-- task: AzurePowerShell@3
-```
-
-For `azureSubscription`, provide the name of the service connection you created.
-
-```yaml
-inputs:
- azureSubscription: '<your-connection-name>'
-```
-
-For `scriptPath`, provide the relative path from the pipeline file to your script. You can look in your repository to see the path.
-
-```yaml
-ScriptPath: '<your-relative-path>/<script-file-name>.ps1'
-```
-
-In `ScriptArguments`, provide any parameters needed by your script. The following example shows some parameters for a script, but you'll need to customize the parameters for your script.
-
-```yaml
-ScriptArguments: -Location 'centralus' -ResourceGroupName 'demogroup' -TemplateFile templates\mainTemplate.json
-```
-
-When you select **Save**, the build pipeline is automatically run. Go back to the summary for your build pipeline, and watch the status.
-
-![View results](./media/add-template-to-azure-pipelines/view-results.png)
-
-You can select the currently running pipeline to see details about the tasks. When it finishes, you see the results for each step.
-
-## Copy and deploy tasks
-
-This section shows how to configure continuous deployment by using a two tasks. The first task stages the artifacts to a storage account and the second task deploy the template.
-
-To copy files to a storage account, the service principal for the service connection must be assigned the Storage Blob Data Contributor or Storage Blob Data Owner role. For more information, see [Get started with AzCopy](../../storage/common/storage-use-azcopy-v10.md).
-
-The following YAML shows the [Azure file copy task](/azure/devops/pipelines/tasks/deploy/azure-file-copy).
-
-```yml
-trigger:
-- master-
-pool:
- vmImage: 'windows-latest'
-
-steps:
-- task: AzureFileCopy@4
- inputs:
- SourcePath: 'templates'
- azureSubscription: 'copy-connection'
- Destination: 'AzureBlob'
- storage: 'demostorage'
- ContainerName: 'projecttemplates'
- name: AzureFileCopy
-```
-
-There are several parts of this task to revise for your environment. The `SourcePath` indicates the location of the artifacts relative to the pipeline file.
-
-```yaml
-SourcePath: '<path-to-artifacts>'
-```
-
-For `azureSubscription`, provide the name of the service connection you created.
-
-```yaml
-azureSubscription: '<your-connection-name>'
-```
-
-For storage and container name, provide the names of the storage account and container you want to use for storing the artifacts. The storage account must exist.
-
-```yaml
-storage: '<your-storage-account-name>'
-ContainerName: '<container-name>'
-```
-
-After creating the copy file task, you're ready to add the task to deploy the staged template.
-
-The following YAML shows the [Azure Resource Manager template deployment task](https://github.com/microsoft/azure-pipelines-tasks/blob/master/Tasks/AzureResourceManagerTemplateDeploymentV3/README.md):
-
-```yaml
-- task: AzureResourceManagerTemplateDeployment@3
- inputs:
- deploymentScope: 'Resource Group'
- azureResourceManagerConnection: 'copy-connection'
- subscriptionId: '00000000-0000-0000-0000-000000000000'
- action: 'Create Or Update Resource Group'
- resourceGroupName: 'demogroup'
- location: 'West US'
- templateLocation: 'URL of the file'
- csmFileLink: '$(AzureFileCopy.StorageContainerUri)templates/mainTemplate.json$(AzureFileCopy.StorageContainerSasToken)'
- csmParametersFileLink: '$(AzureFileCopy.StorageContainerUri)templates/mainTemplate.parameters.json$(AzureFileCopy.StorageContainerSasToken)'
- deploymentMode: 'Incremental'
- deploymentName: 'deploy1'
-```
-
-There are several parts of this task to review in greater detail.
-
-* `deploymentScope`: Select the scope of deployment from the options: `Management Group`, `Subscription`, and `Resource Group`.
-
-* `azureResourceManagerConnection`: Provide the name of the service connection you created.
-
-* `subscriptionId`: Provide the target subscription ID. This property only applies to the Resource Group deployment scope and the subscription deployment scope.
-
-* `resourceGroupName` and `location`: provide the name and location of the resource group you want to deploy to. The task creates the resource group if it doesn't exist.
-
- ```yml
- resourceGroupName: '<resource-group-name>'
- location: '<location>'
- ```
-
-* `csmFileLink`: Provide the link for the staged template. When setting the value, use variables returned from the file copy task. The following example links to a template named mainTemplate.json. The folder named **templates** is included because that where the file copy task copied the file to. In your pipeline, provide the path to your template and the name of your template.
-
- ```yml
- csmFileLink: '$(AzureFileCopy.StorageContainerUri)templates/mainTemplate.json$(AzureFileCopy.StorageContainerSasToken)'
- ```
-
-Your pipeline look like:
-
-```yml
-trigger:
-- master-
-pool:
- vmImage: 'windows-latest'
-
-steps:
-- task: AzureFileCopy@4
- inputs:
- SourcePath: 'templates'
- azureSubscription: 'copy-connection'
- Destination: 'AzureBlob'
- storage: 'demostorage'
- ContainerName: 'projecttemplates'
- name: AzureFileCopy
-- task: AzureResourceManagerTemplateDeployment@3
- inputs:
- deploymentScope: 'Resource Group'
- azureResourceManagerConnection: 'copy-connection'
- subscriptionId: '00000000-0000-0000-0000-000000000000'
- action: 'Create Or Update Resource Group'
- resourceGroupName: 'demogroup'
- location: 'West US'
- templateLocation: 'URL of the file'
- csmFileLink: '$(AzureFileCopy.StorageContainerUri)templates/mainTemplate.json$(AzureFileCopy.StorageContainerSasToken)'
- csmParametersFileLink: '$(AzureFileCopy.StorageContainerUri)templates/mainTemplate.parameters.json$(AzureFileCopy.StorageContainerSasToken)'
- deploymentMode: 'Incremental'
- deploymentName: 'deploy1'
-```
-
-When you select **Save**, the build pipeline is automatically run. Go back to the summary for your build pipeline, and watch the status.
-
-## Example
-
-The following pipeline shows how to build a Bicep file and how to deploy the compiled template:
+An Azure CLI task takes the following inputs:
+* `azureSubscription`: Provide the name of the service connection that you created. See [Prepare your project](#prepare-your-project).
+* `scriptType`: Use **bash**.
+* `scriptLocation`: Use **inlineScript** or **scriptPath**. If you specify **scriptPath**, you also need to specify a `scriptPath` parameter.
+* `inlineScript`: Specify your script lines. The script provided in the sample creates a resource group and deploys a Bicep file named *azuredeploy.bicep* that exists in the root of the repository.
## Next steps
azure-resource-manager Operators https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-resource-manager/bicep/operators.md
description: Describes the Bicep operators available for Azure Resource Manager
Previously updated : 06/01/2021 Last updated : 06/23/2021 # Bicep operators
The numeric operators use integers to do calculations and return integer values.
> Subtract and minus use the same operator. The functionality is different because subtract uses two > operands and minus uses one operand.
+## Operator precedence and associativity
+
+The operators below are listed in descending order of precedence (the higher the position, the higher the precedence). Operators listed at the same level have equal precedence.
+
+| Symbol | Type of Operation | Associativity |
+|:-|:-|:-|
+| `(` `)` `[` `]` `.` `::` | Parentheses, array indexers, property accessors, and nested resource accessor | Left to right |
+| `!` `-` | Unary | Right to left |
+| `%` `*` `/` | Multiplicative | Left to right |
+| `+` `-` | Additive | Left to right |
+| `<=` `<` `>` `>=` | Relational | Left to right |
+| `==` `!=` `=~` `!~` | Equality | Left to right |
+| `&&` | Logical AND | Left to right |
+| `\|\|` | Logical OR | Left to right |
+| `?` `:` | Conditional expression (ternary) | Right to left
+| `??` | Coalesce | Left to right
+ ## Next steps - To create a Bicep file, see [Quickstart: Create Bicep files with Visual Studio Code](./quickstart-create-bicep-use-visual-studio-code.md).
azure-resource-manager Azure Subscription Service Limits https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-resource-manager/management/azure-subscription-service-limits.md
Title: Azure subscription limits and quotas description: Provides a list of common Azure subscription and service limits, quotas, and constraints. This article includes information on how to increase limits along with maximum values. Previously updated : 06/09/2021 Last updated : 06/23/2021 # Azure subscription and service limits, quotas, and constraints
azure-sql Auto Failover Group Configure https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-sql/database/auto-failover-group-configure.md
Create your failover group and add your database to it using PowerShell.
# Create a failover group between the servers $failovergroup = Write-host "Creating a failover group between the primary and secondary server..." New-AzSqlDatabaseFailoverGroup `
- –ResourceGroupName $resourceGroupName `
+ -ResourceGroupName $resourceGroupName `
-ServerName $serverName ` -PartnerServerName $drServerName `
- –FailoverGroupName $failoverGroupName `
- –FailoverPolicy Automatic `
+ -FailoverGroupName $failoverGroupName `
+ -FailoverPolicy Automatic `
-GracePeriodWithDataLossHours 2 $failovergroup
Create your failover group and add your elastic pool to it using PowerShell.
# Create a failover group between the servers Write-host "Creating failover group..." New-AzSqlDatabaseFailoverGroup `
- –ResourceGroupName $resourceGroupName `
+ -ResourceGroupName $resourceGroupName `
-ServerName $serverName ` -PartnerServerName $drServerName `
- –FailoverGroupName $failoverGroupName `
- –FailoverPolicy Automatic `
+ -FailoverGroupName $failoverGroupName `
+ -FailoverPolicy Automatic `
-GracePeriodWithDataLossHours 2 Write-host "Failover group created successfully."
azure-sql Failover Group Add Elastic Pool Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-sql/database/failover-group-add-elastic-pool-tutorial.md
Create your failover group using PowerShell.
# Create a failover group between the servers Write-host "Creating failover group..." New-AzSqlDatabaseFailoverGroup `
+ -ResourceGroupName $resourceGroupName `
+ ResourceGroupName $resourceGroupName `
-ServerName $serverName ` -PartnerServerName $drServerName `
- –FailoverGroupName $failoverGroupName `
- –FailoverPolicy Automatic `
+ FailoverGroupName $failoverGroupName `
+ FailoverPolicy Automatic `
-GracePeriodWithDataLossHours 2 Write-host "Failover group created successfully."
azure-sql Service Tier Hyperscale https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-sql/database/service-tier-hyperscale.md
GO
As in all other service tiers, Hyperscale guarantees data durability for committed transactions regardless of compute replica availability. The extent of downtime due to the primary replica becoming unavailable depends on the type of failover (planned vs. unplanned), and on the presence of at least one high-availability replica. In a planned failover (i.e. a maintenance event), the system either creates the new primary replica before initiating a failover, or uses an existing high-availability replica as the failover target. In an unplanned failover (i.e. a hardware failure on the primary replica), the system uses a high-availability replica as a failover target if one exists, or creates a new primary replica from the pool of available compute capacity. In the latter case, downtime duration is longer due to extra steps required to create the new primary replica.
-For Hyperscale SLA, see [SLA for Azure SQL Database](https://azure.microsoft.com/support/legal/sla/sql-database/).
+For Hyperscale SLA, see [SLA for Azure SQL Database](https://azure.microsoft.com/support/legal/sla/azure-sql-database).
## Disaster recovery for Hyperscale databases
azure-sql Service Tiers Dtu https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-sql/database/service-tiers-dtu.md
Last updated 5/4/2021
# Service tiers in the DTU-based purchase model [!INCLUDE[appliesto-sqldb](../includes/appliesto-sqldb.md)]
-Service tiers in the DTU-based purchase model are differentiated by a range of compute sizes with a fixed amount of included storage, fixed retention period for backups, and fixed price. All service tiers in the DTU-based purchase model provide flexibility of changing compute sizes with minimal [downtime](https://azure.microsoft.com/support/legal/sla/sql-database/v1_2/); however, there is a switch over period where connectivity is lost to the database for a short amount of time, which can be mitigated using retry logic. Single databases and elastic pools are billed hourly based on service tier and compute size.
+Service tiers in the DTU-based purchase model are differentiated by a range of compute sizes with a fixed amount of included storage, fixed retention period for backups, and fixed price. All service tiers in the DTU-based purchase model provide flexibility of changing compute sizes with minimal [downtime](https://azure.microsoft.com/support/legal/sla/azure-sql-database); however, there is a switch over period where connectivity is lost to the database for a short amount of time, which can be mitigated using retry logic. Single databases and elastic pools are billed hourly based on service tier and compute size.
> [!IMPORTANT] > [Azure SQL Managed Instance](../managed-instance/sql-managed-instance-paas-overview.md) does not support a DTU-based purchasing model.
azure-sql Transparent Data Encryption Byok Configure https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-sql/database/transparent-data-encryption-byok-configure.md
For adding permissions to your server on a Managed HSM, add the 'Managed HSM Cry
> The combined length for the key vault name and key name cannot exceed 94 characters. > [!TIP]
-> An example KeyId from Key Vault:<br/>https://contosokeyvault.vault.azure.net/keys/Key1/1a1a2b2b3c3c4d4d5e5e6f6f7g7g8h8h
+> An example KeyId from Key Vault: `https://contosokeyvault.vault.azure.net/keys/Key1/1a1a2b2b3c3c4d4d5e5e6f6f7g7g8h8h`
> > An example KeyId from Managed HSM:<br/>https://contosoMHSM.managedhsm.azure.net/keys/myrsakey
az keyvault set-policy --name <kvname> --object-id <objectid> --resource-group
``` > [!TIP]
-> Keep the key URI or keyID of the new key for the next step, for example: https://contosokeyvault.vault.azure.net/keys/Key1/1a1a2b2b3c3c4d4d5e5e6f6f7g7g8h8h
+> Keep the key URI or keyID of the new key for the next step, for example: `https://contosokeyvault.vault.azure.net/keys/Key1/1a1a2b2b3c3c4d4d5e5e6f6f7g7g8h8h`
## Add the Key Vault key to the server and set the TDE Protector
azure-sql Hadr Cluster Best Practices https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-sql/virtual-machines/windows/hadr-cluster-best-practices.md
If tuning your cluster heartbeat and threshold settings as recommended is insuff
Start by increasing the following parameters from their default values for relaxed monitoring, and adjust as necessary:
-|Parameter |Default value |Description |
-||||
-|**Healthcheck timeout**|60000 |Determines health of the primary replica or node. The cluster resource DLL sp_server_diagnostics returns results at an interval that equals 1/3 of the health-check timeout threshold. If sp_server_diagnostics is slow or is not returning information, the resource DLL will wait for the full interval of the health-check timeout threshold before determining that the resource is unresponsive, and initiating an automatic failover, if configured to do so. |
-|**Failure-Condition Level** | 2 | Conditions that trigger an automatic failover. There are five failure-condition levels, which range from the least restrictive (level one) to the most restrictive (level five) |
+|Parameter |Default value |Relaxed Value |Description |
+|||||
+|**Healthcheck timeout**|30000 |60000 |Determines health of the primary replica or node. The cluster resource DLL sp_server_diagnostics returns results at an interval that equals 1/3 of the health-check timeout threshold. If sp_server_diagnostics is slow or is not returning information, the resource DLL will wait for the full interval of the health-check timeout threshold before determining that the resource is unresponsive, and initiating an automatic failover, if configured to do so. |
+|**Failure-Condition Level** | 3 | 2 |Conditions that trigger an automatic failover. There are five failure-condition levels, which range from the least restrictive (level one) to the most restrictive (level five) |
Use Transact-SQL (T-SQL) to modify the health check and failure conditions for both AGs and FCIs.
ALTER SERVER CONFIGURATION SET FAILOVER CLUSTER PROPERTY FailureConditionLevel =
Specific to **availability groups**, start with the following recommended parameters, and adjust as necessary:
-|Parameter |Default value |Description |
-||||
-|**Lease timeout**|40000|Prevents split-brain. |
-|**Session timeout**|20 |Checks communication issues between replicas. The session-timeout period is a replica property that controls how long (in seconds) that an availability replica waits for a ping response from a connected replica before considering the connection to have failed. By default, a replica waits 10 seconds for a ping response. This replica property applies to only the connection between a given secondary replica and the primary replica of the availability group. |
-| **Max failures in specified period** | 6 | Used to avoid indefinite movement of a clustered resource within multiple node failures. Too low of a value can lead to the availability group being in a failed state. Increase the value to prevent short disruptions from performance issues as too low a value can lead to the AG being in a failed state. |
+|Parameter |Default value |Relaxed Value |Description |
+|||||
+|**Lease timeout**|20000|40000|Prevents split-brain. |
+|**Session timeout**|10000 |20000|Checks communication issues between replicas. The session-timeout period is a replica property that controls how long (in seconds) that an availability replica waits for a ping response from a connected replica before considering the connection to have failed. By default, a replica waits 10 seconds for a ping response. This replica property applies to only the connection between a given secondary replica and the primary replica of the availability group. |
+| **Max failures in specified period** | 2 | 6 |Used to avoid indefinite movement of a clustered resource within multiple node failures. Too low of a value can lead to the availability group being in a failed state. Increase the value to prevent short disruptions from performance issues as too low a value can lead to the AG being in a failed state. |
Before making any changes, consider the following: - Do not lower any timeout values below their default values.
azure-video-analyzer Computer Vision For Spatial Analysis https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-video-analyzer/video-analyzer-docs/computer-vision-for-spatial-analysis.md
The following are prerequisites for connecting the spatial-analysis module to Az
## Set up Azure resources
-1. To run the Spatial Analysis container, you need a compute device with a [NVIDIA Tesla T4 GPU](https://www.nvidia.com/data-center/tesla-t4/). We recommend that you use [Azure Stack Edge](https://azure.microsoft.com/products/azure-stack/edge/) with GPU acceleration, however the container runs on any other desktop machine that has [Ubuntu Desktop 18.04 LTS](http://releases.ubuntu.com/18.04/) installed on the host computer.
+1. To run the Spatial Analysis container, you need a compute device with an [NVIDIA Tesla T4 GPU](https://www.nvidia.com/en-us/data-center/tesla-t4/). We recommend that you use [Azure Stack Edge](https://azure.microsoft.com/products/azure-stack/edge/) with GPU acceleration; however, the container runs on any other desktop machine that has [Ubuntu Desktop 18.04 LTS](http://releases.ubuntu.com/18.04/) installed on the host computer.
#### [Azure Stack Edge device](#tab/azure-stack-edge)
The spatialanalysis is a large container and its startup time can take up to 30
Try different operations that the `spatialAnalysis` module offers, please refer to the following pipelineTopologies: - [personCount](https://raw.githubusercontent.com/Azure/video-analyzer/main/pipelines/live/topologies/spatial-analysis/person-count-operation-topology.json)-- [personDistance](https://raw.githubusercontent.com/Azure/video-analyzer/main/pipelines/live/topologies/spatial-analysis/person-distance-pperation-topology.json)
+- [personDistance](https://raw.githubusercontent.com/Azure/video-analyzer/main/pipelines/live/topologies/spatial-analysis/person-distance-operation-topology.json)
- [personCrossingLine](https://raw.githubusercontent.com/Azure/video-analyzer/main/pipelines/live/topologies/spatial-analysis/person-line-crossing-operation-topology.json) - [personZoneCrossing](https://raw.githubusercontent.com/Azure/video-analyzer/main/pipelines/live/topologies/spatial-analysis/person-zone-crossing-operation-topology.json) - [customOperation](https://raw.githubusercontent.com/Azure/video-analyzer/main/pipelines/live/topologies/spatial-analysis/custom-operation-topology.json)
azure-video-analyzer Event Based Video Recording Concept https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-video-analyzer/video-analyzer-docs/event-based-video-recording-concept.md
Events from the motion detector node also trigger the signal gate processor node
### Video recording based on events from other sources
-In this use case, signals from another IoT sensor can be used to trigger recording of video. The diagram below shows a graphical representation of a pipeline that addresses this use case. The JSON representation of the pipeline topology of such a pipeline can be found [here](https://raw.githubusercontent.com/Azure/video-analyzer/main/pipelines/live/topologies/evr-hubMessage-files/topology.json).
+In this use case, signals from another IoT sensor can be used to trigger recording of video. The diagram below shows a graphical representation of a pipeline that addresses this use case. The JSON representation of the pipeline topology of such a pipeline can be found [here](https://raw.githubusercontent.com/Azure/video-analyzer/main/pipelines/live/topologies/evr-hubMessage-file-sink/topology.json).
> [!div class="mx-imgBorder"] > :::image type="content" source="./media/event-based-video-recording/other-sources.png" alt-text="Event-based recording of live video when signaled by an external source.":::
azure-video-analyzer Player Widget https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-video-analyzer/video-analyzer-docs/player-widget.md
We did a simple configuration for the player above, but it supports a wider rang
### Alternate ways to load the code into your application
-The package used to get the code into your application is an NPM package [here](https://www.npmjs.com/package/video-analyzer-widgets). While in the above example the latest version was loaded at run time directly from the repository, you can also download and install the package locally using:
+The package used to get the code into your application is an NPM package [here](https://www.npmjs.com/package/@azure/video-analyzer-widgets). While in the above example the latest version was loaded at run time directly from the repository, you can also download and install the package locally using:
```bash npm install @azure/video-analyzer/widgets
document.firstElementChild.appendChild(avaPlayer);
## Next steps
-* Learn more about the [widget API](https://github.com/Azure/video-analyzer/widgets)
+* Learn more about the [widget API](https://github.com/Azure/video-analyzer/tree/main/widgets)
azure-video-analyzer Record Event Based Live Video https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-video-analyzer/video-analyzer-docs/record-event-based-live-video.md
In about 30 seconds, refresh Azure IoT Hub in the lower-left section in Visual S
1. Next, under the **livePipelineSet** and **pipelineTopologyDelete** nodes, ensure that the value of **topologyName** matches the value of the **name** property in the above pipeline topology: `"pipelineTopologyName" : "EVRtoVideosOnObjDetect"`
-1. Open the [pipeline topology](https://raw.githubusercontent.com/Azure/video-analyzer/main/pipelines/live/topologies/evr-hubMessage-videos/topology.json) in a browser, and look at videoName - it is hard-coded to `sample-evr-video`. This is acceptable for a tutorial. In production, you would take care to ensure that each unique RTSP camera is recorded to a video resource with a unique name.
+1. Open the [pipeline topology](https://raw.githubusercontent.com/Azure/video-analyzer/main/pipelines/live/topologies/evr-hubMessage-video-sink/topology.json) in a browser, and look at videoName - it is hard-coded to `sample-evr-video`. This is acceptable for a tutorial. In production, you would take care to ensure that each unique RTSP camera is recorded to a video resource with a unique name.
1. Start a debugging session by selecting F5. You'll see some messages printed in the **TERMINAL** window. 1. The operations.json file starts off with calls to pipelineTopologyList and livePipelineList. If you've cleaned up resources after previous quickstarts or tutorials, this action returns empty lists and then pauses for you to select **Enter**, as shown: ```
azure-video-analyzer Record Stream Inference Data With Video https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-video-analyzer/video-analyzer-docs/record-stream-inference-data-with-video.md
Next, browse to the src/cloud-to-device-console-app folder. Here you'll see the
1. Next, under the **livePipelineSet** and **pipelineTopologyDelete** nodes, ensure that the value of **topologyName** matches the value of the **name** property in the above pipeline topology: `"pipelineTopologyName" : "CVRHttpExtensionObjectTracking"`
-1. Open the [pipeline topology](https://raw.githubusercontent.com/Azure/video-analyzer/main/pipelines/live/topologies/cvr-with-httpExtension-objTracking/topology.json) in a browser, and look at videoName - it is hard-coded to `sample-cvr-with-inference-metadata`. This is acceptable for a tutorial. In production, you would take care to ensure that each unique RTSP camera is recorded to a video resource with a unique name.
+1. Open the [pipeline topology](https://raw.githubusercontent.com/Azure/video-analyzer/main/pipelines/live/topologies/cvr-with-httpExtension-and-objectTracking/topology.json) in a browser, and look at videoName - it is hard-coded to `sample-cvr-with-inference-metadata`. This is acceptable for a tutorial. In production, you would take care to ensure that each unique RTSP camera is recorded to a video resource with a unique name.
1. Examine the settings for the HTTP extension node.
azure-vmware Concepts Private Clouds Clusters https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-vmware/concepts-private-clouds-clusters.md
Azure VMware Solution delivers VMware-based private clouds in Azure. The private
A private cloud includes clusters with: -- Dedicated bare-metal server nodes provisioned with VMware ESXi hypervisor
+- Dedicated bare-metal server hosts provisioned with VMware ESXi hypervisor
- vCenter Server for managing ESXi and vSAN - VMware NSX-T software-defined networking for vSphere workload VMs - VMware vSAN datastore for vSphere workload VMs
backup Backup Architecture https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/backup/backup-architecture.md
Title: Architecture Overview description: Provides an overview of the architecture, components, and processes used by the Azure Backup service. Previously updated : 02/19/2019 Last updated : 06/23/2021 # Azure Backup architecture and components
Storage consumption, recovery time objective (RTO), and network consumption vari
![Image showing comparisons of backup methods](./media/backup-architecture/backup-method-comparison.png)
+## SAP HANA backup types
+
+The following table explains the different types of backups used for SAP HANA databases and how often they're used:
+
+| Backup type | Details | Usage |
+| | | |
+| **Full backup** | A full database backup backs up the entire database. This type of backup can be independently used to restore to a specific point. | At most, you can schedule one full backup per day. <br><br> You can choose to schedule a full backup on a daily or weekly interval. |
+| **Differential backup** | A differential backup is based on the most recent, previous full-data backup. <br><br> It captures only the data that's changed since the previous full backup. | At most, you can schedule one differential backup per day. <br><br> You can't configure a full backup and a differential backup on the same day. |
+| **Incremental backup** | An incremental backup is based on the most recent full, differential, or incremental data backup. <br><br> It captures only the data that's changed since that previous backup. | At most, you can schedule one incremental backup per day. <br><br> You can't schedule both differential and incremental backups on a database; only one delta backup type can be scheduled. <br><br> You can't configure a full backup and an incremental backup on the same day. |
+| **Transaction log backup** | A log backup enables point-in-time restoration up to a specific second. | At most, you can configure transaction log backups every 15 minutes. |
+ ## Backup features The following table summarizes the supported features for the different types of backup:
backup Backup Blobs Storage Account Cli https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/backup/backup-blobs-storage-account-cli.md
In this article, you'll learn how to:
- Run an on-demand backup job
-For information on the Azure Blobs regions availability, supported scenarios, and limitations, see the [support matrix](disk-backup-support-matrix.md).
+For information on the Azure Blobs regions availability, supported scenarios, and limitations, see the [support matrix](blob-backup-support-matrix.md).
## Create a Backup vault
The policy template consists of a lifecycle only (which decides when to delete/c
Once the policy JSON has all the required values, proceed to create a new policy from the policy object using the [az dataprotection backup-policy create](/cli/azure/dataprotection/backup-policy?view=azure-cli-latest&preserve-view=true#az_dataprotection_backup_policy_create) command. ```azurecli-interactive
-az dataprotection backup-policy get-default-policy-template --datasource-type AzureDisk > policy.json
+az dataprotection backup-policy get-default-policy-template --datasource-type AzureBlob > policy.json
az dataprotection backup-policy create -g testBkpVaultRG --vault-name TestBkpVault -n BlobBackup-Policy --policy policy.json {
You need to assign a few permissions via RBAC to vault (represented by vault MSI
Once all the relevant permissions are set, the configuration of backup is performed in 2 steps. First, we prepare the relevant request by using the relevant vault, policy, storage account using the [az dataprotection backup-instance initialize](/cli/azure/dataprotection/backup-instance?view=azure-cli-latest&preserve-view=true#az_dataprotection_backup_instance_initialize) command. Then, we submit the request to protect the disk using the [az dataprotection backup-instance create](/cli/azure/dataprotection/backup-instance?view=azure-cli-latest&preserve-view=true#az_dataprotection_backup_instance_create) command. ```azurecli-interactive
-az dataprotection backup-instance initialize --datasource-type AzureBlob -l southeastasia --policy-id "subscriptions/xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx/resourceGroups/testBkpVaultRG/providers/Microsoft.DataProtection/backupVaults/TestBkpVault/backupPolicies/BlobBackup-Policy" --datasource-id "/subscriptions/xxxxxxxx-xxxx-xxxx-xxxx/resourcegroups/blobrg/providers/Microsoft.Storage/storageAccounts/CLITestSA" > backup_instance.json
+az dataprotection backup-instance initialize --datasource-type AzureBlob -l southeastasia --policy-id "/subscriptions/xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx/resourceGroups/testBkpVaultRG/providers/Microsoft.DataProtection/backupVaults/TestBkpVault/backupPolicies/BlobBackup-Policy" --datasource-id "/subscriptions/xxxxxxxx-xxxx-xxxx-xxxx/resourcegroups/blobrg/providers/Microsoft.Storage/storageAccounts/CLITestSA" > backup_instance.json
``` ```azurecli-interactive
backup Backup Support Matrix Iaas https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/backup/backup-support-matrix-iaas.md
Backup of Azure VMs with locks | Unsupported for unmanaged VMs. <br><br> Support
[Spot VMs](../virtual-machines/spot-vms.md) | Unsupported. Azure Backup restores Spot VMs as regular Azure VMs. [Azure Dedicated Host](../virtual-machines/dedicated-hosts.md) | Supported<br></br>When you restore an Azure VM through the [Create New](backup-azure-arm-restore-vms.md#create-a-vm) option, the restore succeeds but the VM isn't placed on the dedicated host. To achieve this, we recommend that you [restore as disks](backup-azure-arm-restore-vms.md#restore-disks); then use the template to create a VM on the dedicated host and attach the disks.<br></br>This isn't applicable in the secondary region while performing [Cross Region Restore](backup-azure-arm-restore-vms.md#cross-region-restore). Windows Storage Spaces configuration of standalone Azure VMs | Supported
-[Azure VM Scale Sets](../virtual-machine-scale-sets/virtual-machine-scale-sets-orchestration-modes.md#scale-sets-with-flexible-orchestration) | Supported for both uniform and flexible orchestration models to back up and restore Single Azure VM.
+[Azure VM Scale Sets](../virtual-machine-scale-sets/virtual-machine-scale-sets-orchestration-modes.md#scale-sets-with-flexible-orchestration) | Supported for the flexible orchestration model to back up and restore a single Azure VM.
## VM storage support
Data disk size | Individual disk size can be up to 32 TB and a maximum of 256 TB
Storage type | Standard HDD, Standard SSD, Premium SSD. Managed disks | Supported. Encrypted disks | Supported.<br/><br/> Azure VMs enabled with Azure Disk Encryption can be backed up (with or without the Azure AD app).<br/><br/> Encrypted VMs can't be recovered at the file/folder level. You must recover the entire VM.<br/><br/> You can enable encryption on VMs that are already protected by Azure Backup.
-Disks with Write Accelerator enabled | Currently, Azure VM with WA disk backup is previewed in all Azure public regions. <br><br> (Quota is exceeded and no further whitelisting is possible until GA) <br><br> Snapshots don't include WA disk snapshots for unsupported subscriptions as WA disk will be excluded.
+Disks with Write Accelerator enabled | Currently, Azure VM with WA disk backup is previewed in all Azure public regions. <br><br> (Quota is exceeded and no further change to the approved list is possible until GA) <br><br> Snapshots don't include WA disk snapshots for unsupported subscriptions as WA disk will be excluded.
Back up & Restore deduplicated VMs/disks | Azure Backup doesn't support deduplication. For more information, see this [article](./backup-support-matrix.md#disk-deduplication-support) <br/> <br/> - Azure Backup doesn't deduplicate across VMs in the Recovery Services vault <br/> <br/> - If there are VMs in deduplication state during restore, the files can't be restored because the vault doesn't understand the format. However, you can successfully perform the full VM restore. Add disk to protected VM | Supported. Resize disk on protected VM | Supported.
backup Powershell Backup Samples https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/backup/powershell-backup-samples.md
Title: PowerShell Samples description: This article provides links to PowerShell script samples that use Azure Backup to back up and restore data. Previously updated : 01/31/2019 Last updated : 06/23/2021 # Azure Backup PowerShell samples
The following table links to PowerShell script samples that use Azure Backup to
| [Find Registered Storage Account](./scripts/backup-powershell-script-find-recovery-services-vault.md) | Find the Recovery Services vault where the storage account is registered | | [Disable Soft delete for File Shares in a Storage Account](./scripts/disable-soft-delete-for-file-shares.md) | Disable Soft delete for File Shares in a Storage Account| | [Undelete accidentally deleted File share](./scripts/backup-powershell-script-undelete-file-share.md) | Undelete accidentally deleted File share |
+| [Install the latest MARS agent](./scripts/install-latest-microsoft-azure-recovery-services-agent.md) | Install the latest MARS agent on your on-premises Windows server. |
+| [Register MARS agent](./scripts/register-microsoft-azure-recovery-services-agent.md) | Register your on-premises Windows server or client machine with a Recovery Services vault. |
+| [Set file and folder backup policy for on-premises Windows server.](./scripts/set-file-folder-backup-policy.md) | Create a new policy or modify the current file and folder backup policy. |
+| [Set system state backup policy for on-premises Windows server.](./scripts/set-system-state-backup-policy.md) | Create a new backup policy or modify the current system state backup policy. |
+| [Configure backup for on-premises Windows server.](./scripts/microsoft-azure-recovery-services-powershell-all.md) | Configure backup for your on-premises Windows server. |
+
backup Install Latest Microsoft Azure Recovery Services Agent https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/backup/scripts/install-latest-microsoft-azure-recovery-services-agent.md
+
+ Title: Script Sample - Install the latest MARS agent on on-premises Windows servers
+description: Learn how to use a script to install the latest MARS agent on your on-premises Windows servers.
+ Last updated : 06/23/2021++
+# PowerShell Script to install the latest MARS agent on an on-premises Windows server
+
+This script helps you to install the latest MARS agent on your on-premises Windows server.
+
+## Sample script
+
+```azurepowershell
+<#
+
+.SYNOPSIS
+Installs the MARS agent
+
+.DESCRIPTION
+Installs the MARS agent
+
+.ROLE
+Administrators
+
+#>
+Set-StrictMode -Version 5.0
+$ErrorActionPreference = "Stop"
+Try {
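+ # Download the latest MARS agent installer to the temp folder and run it silently.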
+ $agentPath = $env:TEMP + '\MARSAgentInstaller.exe'
+ Invoke-WebRequest -Uri 'https://aka.ms/azurebackup_agent' -OutFile $agentPath
+ & $agentPath /q | out-null
+
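+ # Reload PSModulePath from the registry and report whether the MSOnlineBackup module installed by the agent is now available.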
+ $env:PSModulePath = (Get-ItemProperty -Path 'HKLM:\System\CurrentControlSet\Control\Session Manager\Environment' -Name PSModulePath).PSModulePath
+ $azureBackupModuleName = 'MSOnlineBackup'
+ $azureBackupModule = Get-Module -ListAvailable -Name $azureBackupModuleName
+ if ($azureBackupModule) {
+ $true
+ }
+ else {
+ $false
+ }
+}
+Catch {
+ if ($error[0].ErrorDetails) {
+ throw $error[0].ErrorDetails
+ }
+ throw $error[0]
+}
+
+```
+
+## Next steps
+
+[Learn more](../backup-client-automation.md) about how to use PowerShell to deploy and manage on-premises backups using MARS agent.
backup Microsoft Azure Recovery Services Powershell All https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/backup/scripts/microsoft-azure-recovery-services-powershell-all.md
+
+ Title: Script Sample - Configuring Backup for on-premises Windows server
+description: Learn how to use a script to configure Backup for on-premises Windows server.
+ Last updated : 06/23/2021++
+# PowerShell Script to configure Backup for on-premises Windows server
+
+This script helps you configure backup for your on-premises Windows server, from creating a Recovery Services vault through configuring the MARS agent and the backup policy.
+
+## Sample
+
+```azurepowershell
+# Create Recovery Services Vault (RSV)
+Register-AzResourceProvider -ProviderNamespace "Microsoft.RecoveryServices"
+New-AzResourceGroup -Name "test-rg" -Location "WestUS"
+New-AzRecoveryServicesVault -Name "testvault" -ResourceGroupName "test-rg" -Location "WestUS"
+$Vault1 = Get-AzRecoveryServicesVault -Name "testvault"
+Set-AzRecoveryServicesBackupProperties -Vault $Vault1 -BackupStorageRedundancy GeoRedundant
+
+Get-AzRecoveryServicesVault
+
+# Installing the Azure Backup agent
+$MarsAURL = 'https://aka.ms/Azurebackup_Agent'
+$WC = New-Object System.Net.WebClient
+$WC.DownloadFile($MarsAURL,'C:\downloads\MARSAgentInstaller.EXE')
+C:\Downloads\MARSAgentInstaller.EXE /q
+
+MARSAgentInstaller.exe /q # Please note the commandline install options available here: https://docs.microsoft.com/en-us/azure/backup/backup-client-automation#installation-options
+
+# Registering Windows Server or Windows client machine to a Recovery Services Vault
+$CredsPath = "C:\downloads"
+$CredsFilename = Get-AzRecoveryServicesVaultSettingsFile -Backup -Vault $Vault1 -Path $CredsPath
+$dt = $(Get-Date).ToString("M-d-yyyy")
+$cert = New-SelfSignedCertificate -CertStoreLocation Cert:\CurrentUser\My -FriendlyName 'test-vaultcredentials' -subject "Windows Azure Tools" -KeyExportPolicy Exportable -NotAfter $(Get-Date).AddHours(48) -NotBefore $(Get-Date).AddHours(-24) -KeyProtection None -KeyUsage None -TextExtension @("2.5.29.37={text}1.3.6.1.5.5.7.3.2") -Provider "Microsoft Enhanced Cryptographic Provider v1.0"
+$certificate = [convert]::ToBase64String($cert.Export([System.Security.Cryptography.X509Certificates.X509ContentType]::Pfx))
+$CredsFilename = Get-AzRecoveryServicesVaultSettingsFile -Backup -Vault $Vault1 -Path $CredsPath -Certificate $certificate
+$Env:PSModulePath += ';C:\Program Files\Microsoft Azure Recovery Services Agent\bin\Modules'
+Import-Module -Name 'C:\Program Files\Microsoft Azure Recovery Services Agent\bin\Modules\MSOnlineBackup'
+Start-OBRegistration -VaultCredentials $CredsFilename.FilePath -Confirm:$false
+
+# Networking settings
+Set-OBMachineSetting -NoProxy
+Set-OBMachineSetting -NoThrottle
+
+# Encryption settings
+$PassPhrase = ConvertTo-SecureString -String "Complex!123_STRING" -AsPlainText -Force
+Set-OBMachineSetting -EncryptionPassPhrase $PassPhrase -SecurityPin "<generatedPIN>" #NOTE: You must generate a security pin by selecting Generate, under Settings > Properties > Security PIN in the Recovery Services vault section of the Azure portal.
+# See: https://docs.microsoft.com/en-us/rest/api/backup/securitypins/get
+# See: https://docs.microsoft.com/en-us/powershell/module/azurerm.keyvault/Add-AzureKeyVaultKey?view=azurermps-6.13.0
+
+# Back up files and folders
+$NewPolicy = New-OBPolicy
+$Schedule = New-OBSchedule -DaysOfWeek Saturday, Sunday -TimesOfDay 16:00
+Set-OBSchedule -Policy $NewPolicy -Schedule $Schedule
+
+# Configuring a retention policy
+$RetentionPolicy = New-OBRetentionPolicy -RetentionDays 7
+Set-OBRetentionPolicy -Policy $NewPolicy -RetentionPolicy $RetentionPolicy
+
+# Including and excluding files to be backed up
+$Inclusions = New-OBFileSpec -FileSpec @("C:\", "D:\")
+$Exclusions = New-OBFileSpec -FileSpec @("C:\windows", "C:\temp") -Exclude
+Add-OBFileSpec -Policy $NewPolicy -FileSpec $Inclusions
+Add-OBFileSpec -Policy $NewPolicy -FileSpec $Exclusions
+
+# Applying the policy
+Get-OBPolicy | Remove-OBPolicy
+Set-OBPolicy -Policy $NewPolicy
+Get-OBPolicy | Get-OBSchedule
+Get-OBPolicy | Get-OBRetentionPolicy
+Get-OBPolicy | Get-OBFileSpec
+
+# Performing an on-demand backup
+Get-OBPolicy | Start-OBBackup
+
+# Remote management
+Get-Service -Name WinRM
+Enable-PSRemoting -Force
+Set-ExecutionPolicy -ExecutionPolicy Unrestricted -Force
+
+```
+
+## Next steps
+
+[Learn more](../backup-client-automation.md) about how to use PowerShell to deploy and manage on-premises backups using MARS agent.
backup Register Microsoft Azure Recovery Services Agent https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/backup/scripts/register-microsoft-azure-recovery-services-agent.md
+
+ Title: Script Sample - Register an on-premises Windows server or client machine with a Recovery Services vault
+description: Learn how to use a script to register an on-premises Windows Server or client machine with a Recovery Services vault.
+ Last updated : 06/23/2021++
+# PowerShell Script to register an on-premises Windows server or a client machine with Recovery Services vault
+
+This script helps you to register your on-premises Windows server or client machine with a Recovery Services vault.
+
+## Sample script
+
+```azurepowershell
+<#
+
+.SYNOPSIS
+Registers MARS agent
+
+.DESCRIPTION
+Registers MARS agent
+
+.ROLE
+Administrators
+
+#>
+param (
+ [Parameter(Mandatory = $true)]
+ [String]
+ $vaultcredPath,
+ [Parameter(Mandatory = $true)]
+ [String]
+ $passphrase
+)
+Set-StrictMode -Version 5.0
+$env:PSModulePath = (Get-ItemProperty -Path 'HKLM:\System\CurrentControlSet\Control\Session Manager\Environment' -Name PSModulePath).PSModulePath
+Import-Module MSOnlineBackup
+$ErrorActionPreference = "Stop"
+Try {
+ $date = Get-Date
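+ # Register this server with the Recovery Services vault using the downloaded vault credentials file, then set the encryption passphrase.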
+ Start-OBRegistration -VaultCredentials $vaultcredPath -Confirm:$false
+ $securePassphrase = ConvertTo-SecureString -String $passphrase -AsPlainText -Force
+ Set-OBMachineSetting -EncryptionPassphrase $securePassphrase -SecurityPIN " "
+}
+Catch {
+ if ($error[0].ErrorDetails) {
+ throw $error[0].ErrorDetails
+ }
+ throw $error[0]
+}
+
+```
+
+## How to execute the script
+
+1. Save the above script on your machine with a name of your choice and .ps1 extension.
+1. Execute the script by providing the following parameters:
+ - vaultcredPath - Complete path of the downloaded vault credential file.
+ - passphrase - Plain text string that the script converts into a secure string using the [ConvertTo-SecureString](/powershell/module/microsoft.powershell.security/convertto-securestring?view=powershell-7.1&preserve-view=true) cmdlet. An example invocation is shown after the note below.
+
+>[!Note]
+>You also need to provide the Security PIN generated from the Azure portal. To generate the PIN, navigate to **Settings** -> **Properties** -> **Security PIN** in the Recovery Services vault blade, and then select **Generate**.
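+
+For example, if the script above were saved as `Register-MarsAgent.ps1` (an illustrative file name), it could be run like this:
+
+```azurepowershell
+# Illustrative invocation; replace the vault credentials path and passphrase with your own values.
+.\Register-MarsAgent.ps1 -vaultcredPath "C:\downloads\testvault.VaultCredentials" -passphrase "Complex!123_STRING"
+```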
+
+## Next steps
+
+[Learn more](../backup-client-automation.md) about how to use PowerShell to deploy and manage on-premises backups using MARS agent.
+
backup Set File Folder Backup Policy https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/backup/scripts/set-file-folder-backup-policy.md
+
+ Title: Script Sample - Create a new or modify the current file and folder backup policy
+description: Learn how to use a script to create a new policy or modify the current file and folder backup policy.
+ Last updated : 06/23/2021++
+# PowerShell Script to create a new or modify the current file and folder backup policy
+
+This script helps you to create a new backup policy or modify the current file and folder backup policy set for the server protected by the MARS agent.
+
+## Sample script
+
+```azurepowershell
+<#
+
+.SYNOPSIS
+ Modify file folder policy
+
+.DESCRIPTION
+ Modify file folder policy
+
+.ROLE
+Administrators
+
+#>
+param (
+ [Parameter(Mandatory = $true)]
+ [string[]]
+ $filePath,
+ [Parameter(Mandatory = $true)]
+ [string[]]
+ $daysOfWeek,
+ [Parameter(Mandatory = $true)]
+ [string[]]
+ $timesOfDay,
+ [Parameter(Mandatory = $true)]
+ [int]
+ $weeklyFrequency,
+
+ [Parameter(Mandatory = $false)]
+ [int]
+ $retentionDays,
+
+ [Parameter(Mandatory = $false)]
+ [Boolean]
+ $retentionWeeklyPolicy,
+ [Parameter(Mandatory = $false)]
+ [int]
+ $retentionWeeks,
+
+ [Parameter(Mandatory = $false)]
+ [Boolean]
+ $retentionMonthlyPolicy,
+ [Parameter(Mandatory = $false)]
+ [int]
+ $retentionMonths,
+
+ [Parameter(Mandatory = $false)]
+ [Boolean]
+ $retentionYearlyPolicy,
+ [Parameter(Mandatory = $false)]
+ [int]
+ $retentionYears
+)
+Set-StrictMode -Version 5.0
+$env:PSModulePath = (Get-ItemProperty -Path 'HKLM:\System\CurrentControlSet\Control\Session Manager\Environment' -Name PSModulePath).PSModulePath
+Import-Module MSOnlineBackup
+$ErrorActionPreference = "Stop"
+Try {
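+ # Convert the string parameters into TimeSpan and DayOfWeek values used to build the backup schedule.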
+ $timesOfDaySchedule = @()
+ foreach ($time in $timesOfDay) {
+ $timesOfDaySchedule += ([TimeSpan]$time)
+ }
+ $daysOfWeekSchedule = @()
+ foreach ($day in $daysOfWeek) {
+ $daysOfWeekSchedule += ([System.DayOfWeek]$day)
+ }
+
+ $schedule = New-OBSchedule -DaysOfWeek $daysOfWeekSchedule -TimesOfDay $timesOfDaySchedule -WeeklyFrequency $weeklyFrequency
+ if ($daysOfWeekSchedule.Count -eq 7) {
+ if ($retentionWeeklyPolicy -and $retentionMonthlyPolicy -and $retentionYearlyPolicy) {
+ $retention = New-OBRetentionPolicy -RetentionDays $retentionDays -RetentionWeeklyPolicy:$true -WeekDaysOfWeek $daysOfWeekSchedule -WeekTimesOfDay $timesOfDaySchedule -RetentionWeeks $retentionWeeks -RetentionMonthlyPolicy:$true -MonthDaysOfWeek $daysOfWeekSchedule -MonthTimesOfDay $timesOfDaySchedule -RetentionMonths $retentionMonths -RetentionYearlyPolicy:$true -YearDaysOfWeek $daysOfWeekSchedule -YearTimesOfDay $timesOfDaySchedule -RetentionYears $retentionYears
+ }
+ elseif ($retentionWeeklyPolicy -and $retentionMonthlyPolicy) {
+ $retention = New-OBRetentionPolicy -RetentionDays $retentionDays -RetentionWeeklyPolicy:$true -WeekDaysOfWeek $daysOfWeekSchedule -WeekTimesOfDay $timesOfDaySchedule -RetentionWeeks $retentionWeeks -RetentionMonthlyPolicy:$true -MonthDaysOfWeek $daysOfWeekSchedule -MonthTimesOfDay $timesOfDaySchedule -RetentionMonths $retentionMonths
+ }
+ elseif ($retentionWeeklyPolicy -and $retentionYearlyPolicy) {
+ $retention = New-OBRetentionPolicy -RetentionDays $retentionDays -RetentionWeeklyPolicy:$true -WeekDaysOfWeek $daysOfWeekSchedule -WeekTimesOfDay $timesOfDaySchedule -RetentionWeeks $retentionWeeks -RetentionYearlyPolicy:$true -YearDaysOfWeek $daysOfWeekSchedule -YearTimesOfDay $timesOfDaySchedule -RetentionYears $retentionYears
+ }
+ elseif ($retentionYearlyPolicy -and $retentionMonthlyPolicy) {
+ $retention = New-OBRetentionPolicy -RetentionDays $retentionDays -RetentionMonthlyPolicy:$true -MonthDaysOfWeek $daysOfWeekSchedule -MonthTimesOfDay $timesOfDaySchedule -RetentionMonths $retentionMonths -RetentionYearlyPolicy:$true -YearDaysOfWeek $daysOfWeekSchedule -YearTimesOfDay $timesOfDaySchedule -RetentionYears $retentionYears
+ }
+ elseif ($retentionWeeklyPolicy) {
+ $retention = New-OBRetentionPolicy -RetentionDays $retentionDays -RetentionWeeklyPolicy:$true -WeekDaysOfWeek $daysOfWeekSchedule -WeekTimesOfDay $timesOfDaySchedule -RetentionWeeks $retentionWeeks
+ }
+ elseif ($retentionMonthlyPolicy) {
+ $retention = New-OBRetentionPolicy -RetentionDays $retentionDays -RetentionMonthlyPolicy:$true -MonthDaysOfWeek $daysOfWeekSchedule -MonthTimesOfDay $timesOfDaySchedule -RetentionMonths $retentionMonths
+ }
+ elseif ($retentionYearlyPolicy) {
+ $retention = New-OBRetentionPolicy -RetentionDays $retentionDays -RetentionYearlyPolicy:$true -YearDaysOfWeek $daysOfWeekSchedule -YearTimesOfDay $timesOfDaySchedule -RetentionYears $retentionYears
+ }
+ else {
+ $retention = New-OBRetentionPolicy -RetentionDays $retentionDays
+ }
+ }
+ else {
+ if ($retentionWeeklyPolicy -and $retentionMonthlyPolicy -and $retentionYearlyPolicy) {
+ $retention = New-OBRetentionPolicy -RetentionWeeklyPolicy:$true -WeekDaysOfWeek $daysOfWeekSchedule -WeekTimesOfDay $timesOfDaySchedule -RetentionWeeks $retentionWeeks -RetentionMonthlyPolicy:$true -MonthDaysOfWeek $daysOfWeekSchedule -MonthTimesOfDay $timesOfDaySchedule -RetentionMonths $retentionMonths -RetentionYearlyPolicy:$true -YearDaysOfWeek $daysOfWeekSchedule -YearTimesOfDay $timesOfDaySchedule -RetentionYears $retentionYears
+ }
+ elseif ($retentionWeeklyPolicy -and $retentionMonthlyPolicy) {
+ $retention = New-OBRetentionPolicy -RetentionWeeklyPolicy:$true -WeekDaysOfWeek $daysOfWeekSchedule -WeekTimesOfDay $timesOfDaySchedule -RetentionWeeks $retentionWeeks -RetentionMonthlyPolicy:$true -MonthDaysOfWeek $daysOfWeekSchedule -MonthTimesOfDay $timesOfDaySchedule -RetentionMonths $retentionMonths
+ }
+ elseif ($retentionWeeklyPolicy -and $retentionYearlyPolicy) {
+ $retention = New-OBRetentionPolicy -RetentionWeeklyPolicy:$true -WeekDaysOfWeek $daysOfWeekSchedule -WeekTimesOfDay $timesOfDaySchedule -RetentionWeeks $retentionWeeks -RetentionYearlyPolicy:$true -YearDaysOfWeek $daysOfWeekSchedule -YearTimesOfDay $timesOfDaySchedule -RetentionYears $retentionYears
+ }
+ elseif ($retentionYearlyPolicy -and $retentionMonthlyPolicy) {
+ $retention = New-OBRetentionPolicy -RetentionMonthlyPolicy:$true -MonthDaysOfWeek $daysOfWeekSchedule -MonthTimesOfDay $timesOfDaySchedule -RetentionMonths $retentionMonths -RetentionYearlyPolicy:$true -YearDaysOfWeek $daysOfWeekSchedule -YearTimesOfDay $timesOfDaySchedule -RetentionYears $retentionYears
+ }
+ elseif ($retentionWeeklyPolicy) {
+ $retention = New-OBRetentionPolicy -RetentionWeeklyPolicy:$true -WeekDaysOfWeek $daysOfWeekSchedule -WeekTimesOfDay $timesOfDaySchedule -RetentionWeeks $retentionWeeks
+ }
+ elseif ($retentionMonthlyPolicy) {
+ $retention = New-OBRetentionPolicy -RetentionMonthlyPolicy:$true -MonthDaysOfWeek $daysOfWeekSchedule -MonthTimesOfDay $timesOfDaySchedule -RetentionMonths $retentionMonths
+ }
+ elseif ($retentionYearlyPolicy) {
+ $retention = New-OBRetentionPolicy -RetentionYearlyPolicy:$true -YearDaysOfWeek $daysOfWeekSchedule -YearTimesOfDay $timesOfDaySchedule -RetentionYears $retentionYears
+ }
+ }
+
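+ # If a policy already exists, replace its file specs, schedule, and retention; otherwise, create and apply a new policy.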
+ $oldPolicy = Get-OBPolicy
+ if ($oldPolicy) {
+ $ospec = Get-OBFileSpec $oldPolicy
+
+ $p = Remove-OBFileSpec -FileSpec $ospec -Policy $oldPolicy -Confirm:$false
+
+ $fileSpec = New-OBFileSpec -FileSpec $filePath
+
+ Add-OBFileSpec -Policy $p -FileSpec $fileSpec -Confirm:$false
+ Set-OBSchedule -Policy $p -Schedule $schedule -Confirm:$false
+ Set-OBRetentionPolicy -Policy $p -RetentionPolicy $retention -Confirm:$false
+ Set-OBPolicy -Policy $p -Confirm:$false
+ $p
+ }
+ else {
+ $policy = New-OBPolicy
+ $fileSpec = New-OBFileSpec -FileSpec $filePath
+ Add-OBFileSpec -Policy $policy -FileSpec $fileSpec
+ Set-OBSchedule -Policy $policy -Schedule $schedule
+ Set-OBRetentionPolicy -Policy $policy -RetentionPolicy $retention
+ Set-OBPolicy -Policy $policy -Confirm:$false
+ }
+}
+Catch {
+ if ($error[0].ErrorDetails) {
+ throw $error[0].ErrorDetails
+ }
+ throw $error[0]
+}
+
+```
+
+## How to execute the script
+
+1. Save the above script on your machine with a name of your choice and .ps1 extension.
+1. Execute the script by providing the following parameters:
+ - Backup schedule and the number of days, weeks, months, or years that the backup needs to be retained.
+ - filePath - Files and folders that should be included in or excluded from the backup.
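+
+For example, assuming the script above was saved as `Set-FileFolderPolicy.ps1` (an illustrative name), a weekend backup of a data folder with four weeks of retention could be requested like this:
+
+```azurepowershell
+# Illustrative invocation; adjust the paths, schedule, and retention values to your needs.
+.\Set-FileFolderPolicy.ps1 -filePath "D:\Data" -daysOfWeek "Saturday","Sunday" -timesOfDay "16:00" -weeklyFrequency 1 -retentionWeeklyPolicy $true -retentionWeeks 4
+```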
+
+## Next steps
+
+[Learn more](../backup-client-automation.md) about how to use PowerShell to deploy and manage on-premises backups using MARS agent.
backup Set System State Backup Policy https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/backup/scripts/set-system-state-backup-policy.md
+
+ Title: Script Sample - Create a new or modify the current system state backup policy
+description: Learn how to use a script to create a new system state backup policy or modify the current one.
+ Last updated : 06/23/2021++
+# PowerShell Script to create a new or modify the current system state backup policy
+
+This script helps you to create a new backup policy or modify the current system state backup policy set for the server protected by the MARS agent.
+
+## Sample script
+
+```azurepowershell
+<#
+
+.SYNOPSIS
+Modify system state policy
+
+.DESCRIPTION
+Modify system state policy
+
+.ROLE
+Administrators
+
+#>
+param (
+ [Parameter(Mandatory = $true)]
+ [string[]]
+ $daysOfWeek,
+ [Parameter(Mandatory = $true)]
+ [string[]]
+ $timesOfDay,
+ [Parameter(Mandatory = $true)]
+ [int]
+ $weeklyFrequency,
+
+ [Parameter(Mandatory = $false)]
+ [int]
+ $retentionDays,
+
+ [Parameter(Mandatory = $false)]
+ [Boolean]
+ $retentionWeeklyPolicy,
+ [Parameter(Mandatory = $false)]
+ [int]
+ $retentionWeeks,
+
+ [Parameter(Mandatory = $false)]
+ [Boolean]
+ $retentionMonthlyPolicy,
+ [Parameter(Mandatory = $false)]
+ [int]
+ $retentionMonths
+)
+Set-StrictMode -Version 5.0
+$env:PSModulePath = (Get-ItemProperty -Path 'HKLM:\System\CurrentControlSet\Control\Session Manager\Environment' -Name PSModulePath).PSModulePath
+Import-Module MSOnlineBackup
+$ErrorActionPreference = "Stop"
+Try {
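+ # If a system state policy already exists, make no changes; otherwise build a new policy with the requested schedule and retention.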
+ $oldPolicy = Get-OBSystemStatePolicy
+ if ($oldPolicy) {
+ return
+ }
+ $policy = New-OBPolicy
+ $policy = Add-OBSystemState -Policy $policy
+
+ $timesOfDaySchedule = @()
+ foreach ($time in $timesOfDay) {
+ $timesOfDaySchedule += ([TimeSpan]$time)
+ }
+ $daysOfWeekSchedule = @()
+ foreach ($day in $daysOfWeek) {
+ $daysOfWeekSchedule += ([System.DayOfWeek]$day)
+ }
+
+ $schedule = New-OBSchedule -DaysOfWeek $daysOfWeekSchedule -TimesOfDay $timesOfDaySchedule -WeeklyFrequency $weeklyFrequency
+ if ($daysOfWeekSchedule.Count -eq 7) {
+ if ($retentionWeeklyPolicy -and $retentionMonthlyPolicy) {
+ $retention = New-OBRetentionPolicy -RetentionDays $retentionDays -RetentionWeeklyPolicy:$true -WeekDaysOfWeek $daysOfWeekSchedule -WeekTimesOfDay $timesOfDaySchedule -RetentionWeeks $retentionWeeks -RetentionMonthlyPolicy:$true -MonthDaysOfWeek $daysOfWeekSchedule -MonthTimesOfDay $timesOfDaySchedule -RetentionMonths $retentionMonths
+ }
+ elseif ($retentionWeeklyPolicy) {
+ $retention = New-OBRetentionPolicy -RetentionDays $retentionDays -RetentionWeeklyPolicy:$true -WeekDaysOfWeek $daysOfWeekSchedule -WeekTimesOfDay $timesOfDaySchedule -RetentionWeeks $retentionWeeks
+ }
+ elseif ($retentionMonthlyPolicy) {
+ $retention = New-OBRetentionPolicy -RetentionDays $retentionDays -RetentionMonthlyPolicy:$true -MonthDaysOfWeek $daysOfWeekSchedule -MonthTimesOfDay $timesOfDaySchedule -RetentionMonths $retentionMonths
+ }
+ else {
+ $retention = New-OBRetentionPolicy -RetentionDays $retentionDays
+ }
+ }
+ else {
+ if ($retentionWeeklyPolicy -and $retentionMonthlyPolicy) {
+ $retention = New-OBRetentionPolicy -RetentionWeeklyPolicy:$true -WeekDaysOfWeek $daysOfWeekSchedule -WeekTimesOfDay $timesOfDaySchedule -RetentionWeeks $retentionWeeks -RetentionMonthlyPolicy:$true -MonthDaysOfWeek $daysOfWeekSchedule -MonthTimesOfDay $timesOfDaySchedule -RetentionMonths $retentionMonths
+ }
+ elseif ($retentionWeeklyPolicy) {
+ $retention = New-OBRetentionPolicy -RetentionWeeklyPolicy:$true -WeekDaysOfWeek $daysOfWeekSchedule -WeekTimesOfDay $timesOfDaySchedule -RetentionWeeks $retentionWeeks
+ }
+ elseif ($retentionMonthlyPolicy) {
+ $retention = New-OBRetentionPolicy -RetentionMonthlyPolicy:$true -MonthDaysOfWeek $daysOfWeekSchedule -MonthTimesOfDay $timesOfDaySchedule -RetentionMonths $retentionMonths
+ }
+ }
+ Set-OBSchedule -Policy $policy -Schedule $schedule
+ Set-OBRetentionPolicy -Policy $policy -RetentionPolicy $retention
+ Set-OBSystemStatePolicy -Policy $policy -Confirm:$false
+}
+Catch {
+ if ($error[0].ErrorDetails) {
+ throw $error[0].ErrorDetails
+ }
+ throw $error[0]
+}
+
+```
+
+## How to execute the script
+
+1. Save the above script on your machine with a name of your choice and .ps1 extension.
+1. Execute the script by providing the following parameters: <br> Schedule of backup and number of days/weeks/months or years that the backup needs to be retained.
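+
+For example, assuming the script above was saved as `Set-SystemStatePolicy.ps1` (an illustrative name), a weekly system state backup retained for four weeks could be requested like this:
+
+```azurepowershell
+# Illustrative invocation; adjust the schedule and retention values to your needs.
+.\Set-SystemStatePolicy.ps1 -daysOfWeek "Sunday" -timesOfDay "02:00" -weeklyFrequency 1 -retentionWeeklyPolicy $true -retentionWeeks 4
+```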
+
+## Next steps
+
+[Learn more](../backup-client-automation.md) about how to use PowerShell to deploy and manage on-premises backups using MARS agent.
cognitive-services Translator Container Configuration https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/Translator/containers/translator-container-configuration.md
recommendations: false
-# Configure Translator Docker containers (Preview)
+# Configure Translator Docker containers (preview)
Cognitive Services provides each container with a common configuration framework. You can easily configure your Translator containers to build a Translator application architecture optimized for robust cloud capabilities and edge locality.
cognitive-services Translator Container Supported Parameters https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/Translator/containers/translator-container-supported-parameters.md
Last updated 05/12/2021
-# Container: Translator translate method
+# Container: Translator translate method (preview)
Translate text.
cognitive-services Translator How To Install Container https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/Translator/containers/translator-how-to-install-container.md
recommendations: false
keywords: on-premises, Docker, container, identify
-# Install and run Translator containers (Preview)
+# Install and run Translator containers (preview)
- Containers enable you to run some features of the Translator service in your own environment. Containers are great for specific security and data governance requirements. In this article you'll learn how to download, install, and run a Translator container.
+ Containers enable you to run several features of the Translator service in your own environment. Containers are great for specific security and data governance requirements. In this article you'll learn how to download, install, and run a Translator container.
Translator container enables you to build a translator application architecture that is optimized for both robust cloud capabilities and edge locality.
cognitive-services Client Sdks https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/Translator/document-translation/client-sdks.md
+
+ Title: "Document Translation C#/.NET or Python client library"
+
+description: Use the Translator C#/.NET or Python client library (SDK) for cloud-based batch document translation service and process
++++++ Last updated : 06/22/2021+++
+# Document Translation client libraries and SDKs
+<!-- markdownlint-disable MD024 -->
+<!-- markdownlint-disable MD001 -->
+[Document Translation](overview.md) is a cloud-based feature of the [Azure Translator](../translator-info-overview.md) service. You can translate entire documents or process batch document translations in various file formats while preserving original document structure and format. In this article, you'll learn how to use the Document Translation service C#/.NET and Python client libraries. For the REST API, see our [Quickstart](get-started-with-document-translation.md) guide.
+
+## Prerequisites
+
+To get started, you'll need:
+
+* An active [**Azure account**](https://azure.microsoft.com/free/cognitive-services/). If you don't have one, you can [**create a free account**](https://azure.microsoft.com/free/).
+
+* A [**single-service Translator resource**](https://ms.portal.azure.com/#create/Microsoft.CognitiveServicesTextTranslation) (**not** a multi-service Cognitive Services resource).
+
+* An [**Azure blob storage account**](https://ms.portal.azure.com/#create/Microsoft.StorageAccount-ARM). You'll [**create containers**](/azure/storage/blobs/storage-quickstart-blobs-portal#create-a-container) in your Azure blob storage account for your source and target files:
+
+ * **Source container**. This container is where you upload your files for translation (required).
+ * **Target container**. This container is where your translated files will be stored (required).
+
+* You also need to create Shared Access Signature (SAS) tokens for your source and target containers. The `sourceUrl` and `targetUrl` values must include a Shared Access Signature (SAS) token, appended as a query string. The token can be assigned to your container or specific blobs. *See* [**Create SAS tokens for Document Translation process**](create-sas-tokens.md).
+
+ * Your **source** container or blob must have designated **read** and **list** access.
+ * Your **target** container or blob must have designated **write** and **list** access.
+
+For more information, *see* [Create SAS tokens](create-sas-tokens.md).
+
+## Client libraries
+
+### [C#/.NET](#tab/csharp)
+
+| [Package (NuGet)][documenttranslation_nuget_package] | [Client library][documenttranslation_client_library_docs] | [REST API][documenttranslation_rest_api] | [Product documentation][documenttranslation_docs] | [Samples][documenttranslation_samples] |
+
+> [!IMPORTANT]
+> This is a prerelease version of the Document Translation SDK. It's made available on an introductory basis so customers can get early access and provide feedback. Prerelease versions are still in development, are subject to change, and certain features may not be supported or might have constrained capabilities. Don't use them in production applications.
+
+### Set up your project
+
+In a console window (such as cmd, PowerShell, or Bash), use the `dotnet new` command to create a new console app with the name `batch-document-translation`. This command creates a simple "Hello World" C# project with a single source file: *program.cs*.
+
+```console
+dotnet new console -n batch-document-translation
+```
+
+Change your directory to the newly created app folder. Build your application with the following command:
+
+```console
+dotnet build
+```
+
+The build output should contain no warnings or errors.
+
+```console
+...
+Build succeeded.
+ 0 Warning(s)
+ 0 Error(s)
+...
+```
+
+### Install the client library
+
+Within the application directory, install the Document Translation client library for .NET using one of the following methods:
+
+#### **.NET CLI**
+
+```console
+dotnet add package Azure.AI.Translation.Document --version 1.0.0-beta.2
+```
+
+#### **NuGet Package Manager**
+
+```console
+Install-Package Azure.AI.Translation.Document -Version 1.0.0-beta.2
+```
+
+#### **NuGet PackageReference**
+
+```xml
+<ItemGroup>
+ <!-- ... -->
+<PackageReference Include="Azure.AI.Translation.Document" Version="1.0.0-beta.2" />
+ <!-- ... -->
+</ItemGroup>
+```
+
+From the project directory, open the Program.cs file in your preferred editor or IDE. Add the following using directives:
+
+```csharp
+using Azure;
+using Azure.AI.Translation.Document;
+
+using System;
+using System.Threading;
+using System.Threading.Tasks;
+```
+
+In the application's **Program** class, create variables for your subscription key and custom endpoint. For details, *see* [Get your custom domain name and subscription key](get-started-with-document-translation.md#get-your-custom-domain-name-and-subscription-key)
+
+```csharp
+private static readonly string endpoint = "<your custom endpoint>";
+private static readonly string subscriptionKey = "<your subscription key>";
+```
+
+### Translate a document or batch files
+
+* To start a translation operation for one or more documents in a single blob container, you will call the `StartTranslationAsync` method.
+
+* To call `StartTranslationAsync`, you need to initialize a `DocumentTranslationInput` object that contains the following parameters:
+
+* **sourceUri**. The SAS URI for the source container containing documents to be translated.
+* **targetUri**. The SAS URI for the target container to which the translated documents will be written.
+* **targetLanguageCode**. The language code for the translated documents. You can find language codes on our [Language support](../language-support.md) page.
+
+```csharp
+
+public async Task StartTranslationAsync() {
+    Uri sourceUri = new Uri("<sourceUrl>");
+    Uri targetUri = new Uri("<targetUrl>");
+
+    DocumentTranslationClient client = new DocumentTranslationClient(new Uri(endpoint), new AzureKeyCredential(subscriptionKey));
+
+    DocumentTranslationInput input = new DocumentTranslationInput(sourceUri, targetUri, "es");
+
+    DocumentTranslationOperation operation = await client.StartTranslationAsync(input);
+
+    await operation.WaitForCompletionAsync();
+
+    Console.WriteLine($" Status: {operation.Status}");
+    Console.WriteLine($" Created on: {operation.CreatedOn}");
+    Console.WriteLine($" Last modified: {operation.LastModified}");
+    Console.WriteLine($" Total documents: {operation.DocumentsTotal}");
+    Console.WriteLine($" Succeeded: {operation.DocumentsSucceeded}");
+    Console.WriteLine($" Failed: {operation.DocumentsFailed}");
+    Console.WriteLine($" In Progress: {operation.DocumentsInProgress}");
+    Console.WriteLine($" Not started: {operation.DocumentsNotStarted}");
+
+    await foreach (DocumentStatusResult document in operation.Value) {
+        Console.WriteLine($"Document with Id: {document.DocumentId}");
+        Console.WriteLine($" Status: {document.Status}");
+        if (document.Status == TranslationStatus.Succeeded) {
+            Console.WriteLine($" Translated Document Uri: {document.TranslatedDocumentUri}");
+            Console.WriteLine($" Translated to language: {document.TranslatedTo}.");
+            Console.WriteLine($" Document source Uri: {document.SourceDocumentUri}");
+        }
+        else {
+            Console.WriteLine($" Error Code: {document.Error.ErrorCode}");
+            Console.WriteLine($" Message: {document.Error.Message}");
+        }
+    }
+}
+```
+
+That's it! You've created a program to translate documents in a blob container using the .NET client library.
+
+### Next steps
+
+> [!div class="nextstepaction"]
+> [**Explore more C# code samples**](https://github.com/Azure/azure-sdk-for-net/tree/Azure.AI.Translation.Document_1.0.0-beta.2/sdk/translation/Azure.AI.Translation.Document/samples)
+
+<!-- LINKS -->
+
+[documenttranslation_nuget_package]: https://www.nuget.org/packages/Azure.AI.Translation.Document/1.0.0-beta.2
+[documenttranslation_client_library_docs]: https://aka.ms/azsdk/net/documenttranslation/docs
+[documenttranslation_docs]: overview.md
+[documenttranslation_rest_api]: reference/rest-api-guide.md
+[documenttranslation_samples]: https://github.com/Azure/azure-sdk-for-net/tree/Azure.AI.Translation.Document_1.0.0-beta.1/sdk/translation/Azure.AI.Translation.Document/samples/README.md
+
+### [Python](#tab/python)
+
+| [Package (PyPI)][python-dt-pypi] | [Client library][python-dt-client-library] | [REST API][python-rest-api] | [Product documentation][python-dt-product-docs] | [Samples][python-dt-samples] |
+
+> [!IMPORTANT]
+> This is a prerelease version of the Document Translation SDK. It's made available on an introductory basis so customers can get early access and provide feedback. Prerelease versions are still in development, are subject to change, and certain features may not be supported or might have constrained capabilities. Don't use them in production applications.
+
+### Set up your project
+
+### Install the client library
+
+If you haven't done so, install [Python](https://www.python.org/downloads/) and then install the latest version of the Translator client library:
+
+```console
+pip install azure-ai-translation-document
+```
+
+### Create your application
+
+Create a new Python application in your preferred editor or IDE. Then import the following libraries.
+
+```python
+ import os
+ from azure.core.credentials import AzureKeyCredential
+ from azure.ai.translation.document import DocumentTranslationClient
+```
+
+Create variables for your resource subscription key, custom endpoint, sourceUrl, and targetUrl. For
+more information, *see* [Get your custom domain name and subscription key](get-started-with-document-translation.md#get-your-custom-domain-name-and-subscription-key)
+
+```python
+ subscriptionKey = "<your-subscription-key>"
+ endpoint = "<your-custom-endpoint>"
+ sourceUrl = "<your-container-sourceUrl>"
+ targetUrl = "<your-container-targetUrl>"
+```
+
+### Translate a document or batch files
+
+```python
+client = DocumentTranslationClient(endpoint, AzureKeyCredential(subscriptionKey))
+
+ poller = client.begin_translation(sourceUrl, targetUrl, "fr")
+ result = poller.result()
+
+ print("Status: {}".format(poller.status()))
+ print("Created on: {}".format(poller.details.created_on))
+ print("Last updated on: {}".format(poller.details.last_updated_on))
+ print("Total number of translations on documents: {}".format(poller.details.documents_total_count))
+
+ print("\nOf total documents...")
+ print("{} failed".format(poller.details.documents_failed_count))
+ print("{} succeeded".format(poller.details.documents_succeeded_count))
+
+ for document in result:
+ print("Document ID: {}".format(document.id))
+ print("Document status: {}".format(document.status))
+ if document.status == "Succeeded":
+ print("Source document location: {}".format(document.source_document_url))
+ print("Translated document location: {}".format(document.translated_document_url))
+ print("Translated to language: {}\n".format(document.translated_to))
+ else:
+ print("Error Code: {}, Message: {}\n".format(document.error.code, document.error.message))
+```
+
+That's it! You've created a program to translate documents in a blob container using the Python client library.
+
+## Next steps
+
+> [!div class="nextstepaction"]
+> [Explore more Python code samples](https://github.com/Azure/azure-sdk-for-python/tree/azure-ai-translation-document_1.0.0b1/sdk/translation/azure-ai-translation-document/samples)
+
+<!-- LINKS -->
+[python-dt-pypi]: https://aka.ms/azsdk/python/texttranslation/pypi
+[python-dt-client-library]: https://aka.ms/azsdk/python/documenttranslation/docs
+[python-rest-api]: reference/rest-api-guide.md
+[python-dt-product-docs]: overview.md
+[python-dt-samples]: https://github.com/Azure/azure-sdk-for-python/tree/azure-ai-translation-document_1.0.0b1/sdk/translation/azure-ai-translation-document/samples
++
+
cognitive-services Get Started With Document Translation https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/Translator/document-translation/get-started-with-document-translation.md
Previously updated : 03/05/2021 Last updated : 06/22/2021 # Get started with Document Translation
To get started, you'll need:
* An active [**Azure account**](https://azure.microsoft.com/free/cognitive-services/). If you don't have one, you can [**create a free account**](https://azure.microsoft.com/free/).
-* A [**Translator**](https://ms.portal.azure.com/#create/Microsoft.CognitiveServicesTextTranslation) service resource (**not** a Cognitive Services resource).
+* A [**single-service Translator resource**](https://ms.portal.azure.com/#create/Microsoft.CognitiveServicesTextTranslation) (**not** a multi-service Cognitive Services resource).
* An [**Azure blob storage account**](https://ms.portal.azure.com/#create/Microsoft.StorageAccount-ARM). You will create containers to store and organize your blob data within your storage account.
cognitive-services Overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/Translator/document-translation/overview.md
Previously updated : 05/20/2021 Last updated : 06/20/2021 # What is Document Translation?
-Document Translation is a cloud-based feature of the [Azure Translator](../translator-info-overview.md) service and is part of the Azure Cognitive Service family of REST APIs. The Document Translation API translates documents to and from 90 languages and dialects while preserving document structure and data format.
+Document Translation is a cloud-based feature of the [Azure Translator](../translator-info-overview.md) service and is part of the Azure Cognitive Service family of REST APIs. The Document Translation API translates documents across all [supported languages and dialects](../../language-support.md) while preserving document structure and data format.
This documentation contains the following article types: * [**Quickstarts**](get-started-with-document-translation.md) are getting-started instructions to guide you through making requests to the service.
-* [**How-to guides**](create-sas-tokens.md) contain instructions for using the feature in more specific or customized ways.
+* [**How-to guides**](create-sas-tokens.md) contain instructions for using the feature in more specific or customized ways.
+* [**Reference**](reference/rest-api-guide.md) provides REST API settings, values, keywords, and configuration.
## Document Translation key features | Feature | Description | | | -| | **Translate large files**| Translate whole documents asynchronously.|
-|**Translate numerous files**|Translate multiple files to and from 90 languages and dialects.|
+|**Translate numerous files**|Translate multiple files across all supported languages and dialects while preserving document structure and data format.|
|**Preserve source file presentation**| Translate files while preserving the original layout and format.| |**Apply custom translation**| Translate documents using general and [custom translation](../customization.md#custom-translator) models.| |**Apply custom glossaries**|Translate documents using custom glossaries.|
cognitive-services Cancel Translation https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/Translator/document-translation/reference/cancel-translation.md
Previously updated : 04/21/2021 Last updated : 06/20/2021 # Cancel translation
-Cancel a currently processing or queued operation. An operation won't be canceled if it is already completed or failed or canceling. A bad request will be returned. All documents that have completed translation won't be canceled and will be charged. All pending documents will be canceled if possible.
+Cancel a currently processing or queued operation. An operation won't be canceled if it is already completed, has failed, or is already canceling; in those cases, a bad request is returned. Documents that have already completed translation won't be canceled and will be charged. All pending documents will be canceled if possible.
## Request URL
cognitive-services Get Supported Glossary Formats https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/Translator/document-translation/reference/get-supported-glossary-formats.md
# Get supported glossary formats
-The Get supported glossary formats method returns a list of supported glossary formats supported by the Document Translation service. The list includes the common file extension used.
+The Get supported glossary formats method returns a list of glossary formats supported by the Document Translation service. The list includes the common file extensions used.
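As a quick illustration, the sketch below shows how such a GET call might look from Python. The `glossaries/formats` path segment is an assumption and should be confirmed against the Request URL section that follows.

```python
import requests

# Assumed path for listing supported glossary formats; confirm against
# the Request URL section below.
endpoint = "https://<your-translator-resource>.cognitiveservices.azure.com"
url = f"{endpoint}/translator/text/batch/v1.0/glossaries/formats"

response = requests.get(url, headers={"Ocp-Apim-Subscription-Key": "<your-key>"})
for fmt in response.json().get("value", []):
    # Each entry typically lists the format name and its common file extensions.
    print(fmt)
```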
## Request URL
cognitive-services Get Translations Status https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/Translator/document-translation/reference/get-translations-status.md
Previously updated : 04/21/2021 Last updated : 06/22/2021
The following information is returned in a successful response.
|code|string|Enums containing high-level error codes. Possible values:<br/><ul><li>InternalServerError</li><li>InvalidArgument</li><li>InvalidRequest</li><li>RequestRateTooHigh</li><li>ResourceNotFound</li><li>ServiceUnavailable</li><li>Unauthorized</li></ul>| |message|string|Gets high-level error message.| |target|string|Gets the source of the error. For example, it would be "documents" or "document ID" if there was an invalid document.|
-|innerError|InnerTranslationError|New Inner Error format thath conforms to Cognitive Services API Guidelines. This contains required properties ErrorCode, message, and optional properties target, details (key value pair), inner error (this can be nested).|
+|innerError|InnerTranslationError|New Inner Error format that conforms to Cognitive Services API Guidelines. This contains required properties ErrorCode, message, and optional properties target, details (key value pair), inner error (this can be nested).|
|innerError.code|string|Gets code error string.| |innerError.message|string|Gets high-level error message.| |innerError.target|string|Gets the source of the error. For example, it would be "documents" or "document ID" if there was an invalid document.|
The following is an example of a successful response.
```JSON {
- "value": [
- {
- "id": "273622bd-835c-4946-9798-fd8f19f6bbf2",
- "createdDateTimeUtc": "2021-03-23T07:03:30.013631Z",
- "lastActionDateTimeUtc": "2021-03-26T01:00:00Z",
- "status": "Succeeded",
- "summary": {
- "total": 10,
- "failed": 1,
- "success": 9,
- "inProgress": 0,
- "notYetStarted": 0,
- "cancelled": 0,
- "totalCharacterCharged": 1000
- }
- }
- ]
+ "value": [
+ {
+ "id": "36724748-f7a0-4db7-b7fd-f041ddc75033",
+ "createdDateTimeUtc": "2021-06-18T03:35:30.153374Z",
+ "lastActionDateTimeUtc": "2021-06-18T03:36:44.6155316Z",
+ "status": "Succeeded",
+ "summary": {
+ "total": 3,
+ "failed": 2,
+ "success": 1,
+ "inProgress": 0,
+ "notYetStarted": 0,
+ "cancelled": 0,
+ "totalCharacterCharged": 0
+ }
+ },
+ {
+ "id": "1c7399a7-6913-4f20-bb43-e2fe2ba1a67d",
+ "createdDateTimeUtc": "2021-05-24T17:57:43.8356624Z",
+ "lastActionDateTimeUtc": "2021-05-24T17:57:47.128391Z",
+ "status": "Failed",
+ "summary": {
+ "total": 1,
+ "failed": 1,
+ "success": 0,
+ "inProgress": 0,
+ "notYetStarted": 0,
+ "cancelled": 0,
+ "totalCharacterCharged": 0
+ }
+ },
+ {
+ "id": "daa2a646-4237-4f5f-9a48-d515c2d9af3c",
+ "createdDateTimeUtc": "2021-04-14T19:49:26.988272Z",
+ "lastActionDateTimeUtc": "2021-04-14T19:49:43.9818634Z",
+ "status": "Succeeded",
+ "summary": {
+ "total": 2,
+ "failed": 0,
+ "success": 2,
+ "inProgress": 0,
+ "notYetStarted": 0,
+ "cancelled": 0,
+ "totalCharacterCharged": 21899
+ }
+ }
+ ],
+ ""@nextLink": "https://westus.cognitiveservices.azure.com/translator/text/batch/v1.0/operations/727BF148-F327-47A0-9481-ABAE6362F11E/documents?$top=5&$skip=15"
}+ ``` ### Example error response
cognitive-services Rest Api Guide https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/Translator/document-translation/reference/rest-api-guide.md
+
+ Title: "Document Translation REST API reference guide"
+
+description: View a list of links to the Document Translation REST APIs.
++++++ Last updated : 06/21/2021+++
+# Document Translation REST API reference guide
+
+Document Translation is a cloud-based feature of the Azure Translator service and is part of the Azure Cognitive Service family of REST APIs. The Document Translation API translates documents across all [supported languages and dialects](../../language-support.md) while preserving document structure and data format. The available methods are listed in the table below:
+
+| Request| Description|
+||--|
+| [**Get supported document formats**](get-supported-document-formats.md)| This method returns a list of supported document formats.|
+|[**Get supported glossary formats**](get-supported-glossary-formats.md)|This method returns a list of supported glossary formats.|
+|[**Get supported storage sources**](get-supported-storage-sources.md)| This method returns a list of supported storage sources/options.|
+|[**Start translation (POST)**](start-translation.md)|This method starts a document translation job. |
+|[**Get documents status**](get-documents-status.md)|This method returns the status of all documents in a translation job.|
+|[**Get document status**](get-document-status.md)| This method returns the status for a specific document in a job. |
+|[**Get translations status**](get-translations-status.md)| This method returns a list of all translation requests submitted by a user and the status for each request.|
+|[**Get translation status**](get-translation-status.md) | This method returns a summary of the status for a specific document translation request. It includes the overall request status and the status for documents that are being translated as part of that request.|
+|[**Cancel translation (DELETE)**](cancel-translation.md)| This method cancels a document translation that is currently processing or queued. |
+
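To give a feel for how these methods compose, the following Python sketch lists translation requests and follows a paging token such as `@nextLink`. The `/batches` path and query parameters are assumptions; see the individual reference pages linked in the table for the exact request shapes.

```python
import requests

# Assumed endpoint and path for the "Get translations status" method;
# see the reference page linked in the table above for exact details.
endpoint = "https://<your-translator-resource>.cognitiveservices.azure.com"
url = f"{endpoint}/translator/text/batch/v1.0/batches?$top=5"
headers = {"Ocp-Apim-Subscription-Key": "<your-key>"}

while url:
    page = requests.get(url, headers=headers).json()
    for job in page.get("value", []):
        print(job["id"], job["status"])
    # "@nextLink" points at the next page of results when more jobs exist.
    url = page.get("@nextLink")
```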
+> [!div class="nextstepaction"]
+> [Explore our client libraries and SDKs for C# and Python.](../client-sdks.md)
cognitive-services Start Translation https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/Translator/document-translation/reference/start-translation.md
Previously updated : 04/21/2021 Last updated : 06/22/2021 # Start translation
-Use this API to start a bulk (batch) translation request with the Document Translation service. Each request can contain multiple documents and must contain a source and destination container for each document.
+Use this API to start a translation request with the Document Translation service. Each request can contain multiple documents and must contain a source and destination container for each document.
The prefix and suffix filter (if supplied) are used to filter folders. The prefix is applied to the subpath after the container name.
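Here is a hedged sketch of what a start-translation request body could look like, expressed as a Python dictionary. The `inputs`/`source`/`targets` field names and the `filter` block carrying `prefix` and `suffix` are assumptions modeled on the container-based request and filtering described above; check the request schema documented in this article for the authoritative shape.

```python
import requests

# Assumed request body shape for starting a batch translation; the field
# names below are illustrative and should be checked against the schema
# documented in this article.
body = {
    "inputs": [
        {
            "source": {
                "sourceUrl": "<source-container-SAS-URL>",
                # Optional prefix/suffix filters applied to the subpath
                # after the container name, as described above.
                "filter": {"prefix": "reports/", "suffix": ".docx"},
            },
            "targets": [
                {"targetUrl": "<target-container-SAS-URL>", "language": "fr"}
            ],
        }
    ]
}

endpoint = "https://<your-translator-resource>.cognitiveservices.azure.com"
response = requests.post(
    f"{endpoint}/translator/text/batch/v1.0/batches",
    headers={"Ocp-Apim-Subscription-Key": "<your-key>"},
    json=body,
)
print(response.status_code)
```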
cognitive-services Whats New https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/Translator/whats-new.md
Previously updated : 05/18/2021 Last updated : 06/22/2021
Review the latest updates to the text Translator service. Bookmark this page to stay up to date with release notes, feature enhancements, and documentation updates.
+## June 2021
+
+### [Document Translation client libraries for C#/.NET and Python](document-translation/client-sdks.md): now available in prerelease.
+
+## May 2021
+
+### [Document Translation: now in general availability](https://www.microsoft.com/translator/blog/2021/05/25/translate-full-documents-with-document-translation-%e2%80%95-now-in-general-availability/)
+
+* **Feature release**: Translator's [Document Translation](document-translation/overview.md) feature is generally available. Document Translation is designed to translate large files and batch documents with rich content while preserving original structure and format. You can also use custom glossaries and custom models built with [Custom Translator](custom-translator/overview.md) to ensure your documents are translated quickly and accurately.
+
+### [Translator service available in containers](https://www.microsoft.com/translator/blog/2021/05/25/translator-service-now-available-in-containers/)
+
+* **New release**: Translator service is available in containers as a gated preview. [Submit an online request](https://aka.ms/csgate-translator) and have it approved prior to getting started. Containers enable you to run several Translator service features in your own environment and are great for specific security and data governance requirements. *See* [Install and run Translator containers (preview)](containers/translator-how-to-install-container.md)
+ ## February 2021 ### [Document Translation public preview](https://www.microsoft.com/translator/blog/2021/02/17/introducing-document-translation/) * **New release**: [Document Translation](document-translation/overview.md) is available as a preview feature of the Translator Service. Preview features are still in development and aren't meant for production use. They're made available on a "preview" basis so customers can get early access and provide feedback. Document Translation enables you to translate large documents and process batch files while still preserving the original structure and format. _See_ [Microsoft Translator blog: Introducing Document Translation](https://www.microsoft.com/translator/blog/2021/02/17/introducing-document-translation/)
-### [Text translation support for 9 added languages](https://www.microsoft.com/translator/blog/2021/02/22/microsoft-translator-releases-nine-new-languages-for-international-mother-language-day-2021/)
+### [Text translation support for nine added languages](https://www.microsoft.com/translator/blog/2021/02/22/microsoft-translator-releases-nine-new-languages-for-international-mother-language-day-2021/)
* Translator service has [text translation language support](language-support.md#text-translation) for the following languages:
Review the latest updates to the text Translator service. Bookmark this page to
### [Text translation support for Odia](https://www.microsoft.com/translator/blog/2020/08/13/odia-language-text-translation-is-now-available-in-microsoft-translator/)
-* **Odia** is a classical language spoken by 35 million people in India and across the world. It joins **Bangla**, **Gujarati**, **Hindi**, **Kannada**, **Malayalam**, **Marathi**, **Punjabi**, **Tamil**, **Telugu**, **Urdu**, and **English** as the twelfth most used language of India supported by Microsoft Translator.
+* **Odia** is a classical language spoken by 35 million people in India and across the world. It joins **Bangla**, **Gujarati**, **Hindi**, **Kannada**, **Malayalam**, **Marathi**, **Punjabi**, **Tamil**, **Telugu**, **Urdu**, and **English** as the 12th most used language of India supported by Microsoft Translator.
communication-services Teams Interop https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/communication-services/concepts/teams-interop.md
The high-level architecture for this use-case looks like this:
![Architecture for Teams interop](./media/call-flows/teams-interop.png)
-While certain Teams meeting features such as raised hand, together mode, and breakout rooms will only be available for Teams users, your custom application will have access to the meeting's core audio, video, chat, and screen sharing capabilities. Meeting chat will be accessible to your custom application user while they're in the call. They won't be able to send or receive messages before joining or after leaving the call.
+Communication Services users can join scheduled Teams meetings as long as anonymous joins are enabled in the [meeting settings](/microsoftteams/meeting-settings-in-teams).
-When a Communication Services user joins the Teams meeting, the display name provided through the Calling SDK will be shown to Teams users. The Communication Services user will otherwise be treated like an anonymous user in Teams. Your custom application should consider user authentication and other security measures to protect Teams meetings. Be mindful of the security implications of enabling anonymous users to join meetings, and use the [Teams security guide](/microsoftteams/teams-security-guide#addressing-threats-to-teams-meetings) to configure capabilities available to anonymous users.
-
-Communication Services Teams Interop is currently in private preview. When generally available, Communication Services users will be treated like "External access users". Learn more about external access in [Call, chat, and collaborate with people outside your organization in Microsoft Teams](/microsoftteams/communicate-with-users-from-other-organizations).
+While certain Teams meeting features such as raised hand, together mode, and breakout rooms will only be available for Teams users, your custom application will have access to the meeting's core audio, video, chat, and screen sharing capabilities. Meeting chat will be accessible to your custom application user while they're in the call. They won't be able to send or receive messages before joining or after leaving the call. If the meeting is scheduled for a channel, Communication Services users will not be able to join the chat or send and receive messages.
-Communication Services users can join scheduled Teams meetings as long as anonymous joins are enabled in the [meeting settings](/microsoftteams/meeting-settings-in-teams). If the meeting is scheduled for a channel, Communication Services users will not be able to join the chat or send and receive messages.
+When a Communication Services user joins the Teams meeting, the display name provided through the Calling SDK will be shown to Teams users. The Communication Services user will otherwise be treated like an anonymous user in Teams. Your custom application should consider user authentication and other security measures to protect Teams meetings. Be mindful of the security implications of enabling anonymous users to join meetings, and use the [Teams security guide](/microsoftteams/teams-security-guide#addressing-threats-to-teams-meetings) to configure capabilities available to anonymous users.
## Teams in Government Clouds (GCC) Azure Communication Services interoperability isn't compatible with Teams deployments using [Microsoft 365 government clouds (GCC)](/MicrosoftTeams/plan-for-government-gcc) at this time.
confidential-computing Confidential Containers https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/confidential-computing/confidential-containers.md
Get started with a sample application and deployment on AKS [here](https://graph
Occlum supports AKS deployments. Follow the deployment instructions with various sample apps [here](https://github.com/occlum/occlum/blob/master/docs/azure_aks_deployment_guide.md)
+### Marblerun
+
+[Marblerun](https://marblerun.sh/) is an orchestration framework for confidential containers. It makes it easy to run and scale confidential services on SGX-enabled Kubernetes. Marblerun takes care of boilerplate tasks like verifying the services in your cluster, managing secrets for them, and establishing enclave-to-enclave mTLS connections between them. Marblerun also ensures that your cluster of confidential containers adheres to a manifest defined in simple JSON. The manifest can be verified by external clients via remote attestation.
+
+![Marblerun Flow](./media/confidential-containers/marblerun-workflow.png)
+
+In a nutshell, Marblerun extends the confidentiality, integrity, and verifiability properties of a single enclave to a Kubernetes cluster.
+
+Marblerun supports confidential containers created with Graphene, Occlum, and EGo. Examples for each SDK are given [here](https://www.marblerun.sh/docs/examples/). Marblerun is built to run on Kubernetes and alongside your existing cloud-native tooling. It comes with an easy-to-use CLI and helm charts. It has first-class support for confidential computing nodes on AKS. Information on how to deploy Marblerun on AKS can be found [here](https://www.marblerun.sh/docs/deployment/cloud/).
## Confidential Containers Demo View the confidential healthcare demo with confidential containers. Sample is available [here](/azure/architecture/example-scenario/confidential/healthcare-inference).
connectors Connectors Create Api Azureblobstorage https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/connectors/connectors-create-api-azureblobstorage.md
Title: Connect to Azure Blob Storage
-description: Create and manage blobs in Azure storage accounts by using Azure Logic Apps
+description: Create and manage blobs in Azure storage accounts by using Azure Logic Apps.
ms.suite: integration-+ Previously updated : 02/21/2020 Last updated : 06/23/2021 tags: connectors # Create and manage blobs in Azure Blob Storage by using Azure Logic Apps
-This article shows how you can access and manage files stored as blobs in your Azure storage account from inside a logic app with the Azure Blob Storage connector. That way, you can create logic apps that automate tasks and workflows for managing your files. For example, you can build logic apps that create, get, update, and delete files in your storage account.
+You can access and manage files stored as blobs in your Azure storage account within Azure Logic Apps by using the [Azure Blob Storage connector](/connectors/azureblobconnector/). This connector provides triggers and actions for blob operations within your logic app workflows. You can use these operations to automate tasks and workflows for managing the files in your storage account. [Available connector actions](/connectors/azureblobconnector/#actions) include checking, deleting, reading, and uploading blobs. The [available trigger](/connectors/azureblobconnector/#triggers) fires when a blob is added or modified.
-Suppose that you have a tool that gets updated on an Azure website. which acts as the trigger for your logic app. When this event happens, you can have your logic app update some file in your blob storage container, which is an action in your logic app.
+You can connect to Blob Storage from both Standard and Consumption logic app resource types. You can use the connector with logic apps in a single-tenant, multi-tenant, or integration service environment (ISE). For logic apps in a single-tenant environment, Blob Storage provides built-in operations and also managed connector operations.
-If you're new to logic apps, review [What is Azure Logic Apps](../logic-apps/logic-apps-overview.md) and [Quickstart: Create your first logic app](../logic-apps/quickstart-create-first-logic-app-workflow.md). For connector-specific technical information, see the [Azure Blob Storage connector reference](/connectors/azureblobconnector/).
+> [!NOTE]
+> For logic apps in an [integration service environment (ISE)](../logic-apps/connect-virtual-network-vnet-isolated-environment-overview.md),
+> this connector's ISE-labeled version uses the [ISE message limits](../logic-apps/logic-apps-limits-and-config.md#message-size-limits) instead.
+
+For more technical details about this connector, such as triggers, actions, and limits, see the [connector's reference page](/connectors/azureblobconnector/).
+
+You can also [use a managed identity with an HTTP trigger or action to do blob operations](#access-blob-storage-with-managed-identities) instead of the Blob Storage connector.
> [!IMPORTANT] > Logic apps can't directly access storage accounts that are behind firewalls if they're both in the same region. As a workaround, > you can have your logic apps and storage account in different regions. For more information about enabling access from Azure Logic
-> Apps to storage accounts behind firewalls, see the [Access storage accounts behind firewalls](#storage-firewalls) section later in this topic.
-
-<a name="blob-storage-limits"></a>
-
-## Limits
-
-* By default, Azure Blob Storage actions can read or write files that are *50 MB or smaller*. To handle files larger than 50 MB but up to 1024 MB, Azure Blob Storage actions support [message chunking](../logic-apps/logic-apps-handle-large-messages.md). The **Get blob content** action implicitly uses chunking.
-
-* Azure Blob Storage triggers don't support chunking. When requesting file content, triggers select only files that are 50 MB or smaller. To get files larger than 50 MB, follow this pattern:
-
- * Use an Azure Blob Storage trigger that returns file properties, such as **When a blob is added or modified (properties only)**.
-
- * Follow the trigger with the Azure Blob Storage **Get blob content** action, which reads the complete file and implicitly uses chunking.
+> Apps to storage accounts behind firewalls, see the [Access storage accounts behind firewalls](#access-storage-accounts-behind-firewalls) section later in this topic.
## Prerequisites
-* An Azure subscription. If you don't have an Azure subscription, [sign up for a free Azure account](https://azure.microsoft.com/free/).
+- An Azure subscription. If you don't have an Azure subscription, [sign up for a free Azure account](https://azure.microsoft.com/free/).
+- An [Azure storage account and storage container](../storage/blobs/storage-quickstart-blobs-portal.md)
+- A logic app workflow from which you want to access your Blob Storage account. If you want to start your logic app with a Blob Storage trigger, you need a [blank logic app](../logic-apps/quickstart-create-first-logic-app-workflow.md).
-* An [Azure storage account and storage container](../storage/blobs/storage-quickstart-blobs-portal.md)
-
-* The logic app where you need access to your Azure blob storage account. To start your logic app with an Azure Blob Storage trigger, you need a [blank logic app](../logic-apps/quickstart-create-first-logic-app-workflow.md).
-
-<a name="add-trigger"></a>
-
-## Add blob storage trigger
-
-In Azure Logic Apps, every logic app must start with a [trigger](../logic-apps/logic-apps-overview.md#logic-app-concepts), which fires when a specific event happens or when a specific condition is met. Each time the trigger fires, the Logic Apps engine creates a logic app instance and starts running your app's workflow.
-
-This example shows how you can start a logic app workflow with the **When a blob is added or modified (properties only)** trigger when a blob's properties gets added or updated in your storage container.
-
-1. In the [Azure portal](https://portal.azure.com) or Visual Studio, create a blank logic app, which opens Logic App Designer. This example uses the Azure portal.
+## Limits
-2. In the search box, enter "azure blob" as your filter. From the triggers list, select the trigger you want.
+- By default, Blob Storage actions can read or write files that are *50 MB or smaller*. To handle files larger than 50 MB but up to 1024 MB, Blob Storage actions support [message chunking](../logic-apps/logic-apps-handle-large-messages.md). The [**Get blob content** action](/connectors/azureblobconnector/#get-blob-content) implicitly uses chunking.
+- The Blob Storage triggers don't support chunking. When requesting file content, triggers select only files that are 50 MB or smaller. To get files larger than 50 MB, follow this pattern:
+ - Use a Blob Storage trigger that returns file properties, such as [**When a blob is added or modified (properties only)**](/connectors/azureblobconnector/#when-a-blob-is-added-or-modified-(properties-only)).
+ - Follow the trigger with the Blob Storage [**Get blob content** action](/connectors/azureblobconnector/#get-blob-content), which reads the complete file and implicitly uses chunking.
- This example uses this trigger: **When a blob is added or modified (properties only)**
+## Add Blob Storage trigger
- ![Select Azure Blob Storage trigger](./media/connectors-create-api-azureblobstorage/add-azure-blob-storage-trigger.png)
+In Logic Apps, every logic app must start with a [trigger](../logic-apps/logic-apps-overview.md#logic-app-concepts), which fires when a specific event happens or when a specific condition is met.
-3. If you're prompted for connection details, [create your blob storage connection now](#create-connection). Or, if your connection already exists, provide the necessary information for the trigger.
+This connector has one available trigger, called either [**When a blob is Added or Modified in Azure Storage** or **When a blob is added or modified (properties only)**](/connectors/azureblobconnector/#when-a-blob-is-added-or-modified-(properties-only)). The trigger fires when a blob's properties are added or updated in your storage container. Each time, the Logic Apps engine creates a logic app instance and starts running your workflow.
- For this example, select the container and folder you want to monitor.
+### [Single-tenant](#tab/single-tenant)
- 1. In the **Container** box, select the folder icon.
+To add a Blob Storage action in a single-tenant logic app that uses a Standard plan:
- 2. In the folder list, choose the right-angle bracket ( **>** ), and then browse until you find and select the folder you want.
+1. Sign in to the [Azure portal](https://portal.azure.com).
+1. Open your workflow in the designer.
+1. In the search box, enter `Azure blob` as your filter. From the triggers list, select the trigger named **When a blob is Added or Modified in Azure Storage**.
+ :::image type="content" source="./media/connectors-create-api-azureblobstorage/standard-trigger-add.png" alt-text="Screenshot of Standard logic app in designer, showing selection of trigger named When a blob is Added or Modified in Azure Storage.":::
+1. If you're prompted for connection details, [create your blob storage connection now](#connect-to-storage-account).
+1. Provide the necessary information for the trigger.
+ 1. Under the **Parameters** tab, add the **Blob Path** to the blob you want to monitor.
+ To find your blob path, open your storage account in the Azure portal. In the navigation menu, under **Data Storage**, select **Containers**. Select your blob container. On the container navigation menu, under **Settings**, select **Properties**. Copy the **URL** value, which is the path to the blob. The path resembles `https://{your-storage-account}.blob.core.windows.net/{your-blob}`.
+ :::image type="content" source="./media/connectors-create-api-azureblobstorage/standard-trigger-configure.png" alt-text="Screenshot of Standard logic app in designer, showing parameters configuration for blob storage trigger.":::
+ 1. Configure other trigger settings as needed.
+ 1. Select **Done**.
+1. Add one or more actions to your workflow.
+1. In the designer toolbar, select **Save** to save your changes.
- ![Select storage folder to use with trigger](./media/connectors-create-api-azureblobstorage/trigger-select-folder.png)
+### [Multi-tenant](#tab/multi-tenant)
- 3. Select the interval and frequency for how often you want the trigger to check the folder for changes.
+To add a Blob Storage action in a multi-tenant logic app that uses a Consumption plan:
-4. When you're done, on the designer toolbar, choose **Save**.
+1. Sign in to the [Azure portal](https://portal.azure.com).
+1. Open your workflow in the Logic Apps Designer.
+1. In the search box, enter "Azure blob" as your filter. From the triggers list, select the trigger **When a blob is added or modified (properties only)**.
+ :::image type="content" source="./media/connectors-create-api-azureblobstorage/consumption-trigger-add.png" alt-text="Screenshot of Consumption logic app in designer, showing selection of blob storage trigger.":::
+1. If you're prompted for connection details, [create your blob storage connection now](#connect-to-storage-account).
+1. Provide the necessary information for the trigger.
+ 1. For **Container**, select the folder icon to choose your blob storage container. Or, enter the path manually.
+ 1. Configure other trigger settings as needed.
+ :::image type="content" source="./media/connectors-create-api-azureblobstorage/consumption-trigger-configure.png" alt-text="Screenshot of Consumption logic app in designer, showing parameters configuration for blob storage trigger.":::
+1. Add one or more actions to your workflow.
+1. In the designer toolbar, select **Save** to save your changes.
-5. Now continue adding one or more actions to your logic app for the tasks you want to perform with the trigger results.
+ <a name="add-action"></a>
-## Add blob storage action
-
-In Azure Logic Apps, an [action](../logic-apps/logic-apps-overview.md#logic-app-concepts) is a step in your workflow that follows a trigger or another action. For this example, the logic app starts with the [Recurrence trigger](../connectors/connectors-native-recurrence.md).
-
-1. In the [Azure portal](https://portal.azure.com) or Visual Studio, open your logic app in Logic App Designer. This example uses the Azure portal.
-
-2. In the Logic App Designer, under the trigger or action, choose **New step**.
+## Add Blob Storage action
- ![Add new step to logic app workflow](./media/connectors-create-api-azureblobstorage/add-new-step-logic-app-workflow.png)
+In Logic Apps, an [action](../logic-apps/logic-apps-overview.md#logic-app-concepts) is a step in your workflow that follows a trigger or another action.
- To add an action between existing steps, move your mouse over the connecting arrow. Choose the plus sign (**+**) that appears, and select **Add an action**.
+### [Single-tenant](#tab/single-tenant)
-3. In the search box, enter "azure blob" as your filter. From the actions list, select the action you want.
+For logic apps in a single-tenant environment:
- This example uses this action: **Get blob content**
+1. Sign in to the [Azure portal](https://portal.azure.com).
+1. Open your workflow in the Logic Apps Designer.
+1. Add a trigger. This example starts with the [**Recurrence** trigger](../connectors/connectors-native-recurrence.md).
+1. Add a new step to your workflow. In the search box, enter "Azure blob" as your filter. Then, select the Blob Storage action that you want to use. This example uses **Reads Blob Content from Azure Storage**.
+ :::image type="content" source="./media/connectors-create-api-azureblobstorage/standard-action-add.png" alt-text="Screenshot of Standard logic app in designer, showing list of available Blob Storage actions.":::
+1. If you're prompted for connection details, [create a connection to your Blob Storage account](#connect-to-storage-account).
+1. Provide the necessary information for the action.
+ 1. For **Container Name**, enter the path for the blob container you want to use.
+ 1. For **Blob name**, enter the path for the blob you want to use.
+ :::image type="content" source="./media/connectors-create-api-azureblobstorage/standard-action-configure.png" alt-text="Screenshot of Standard logic app in designer, showing selection of Blob Storage trigger.":::
+ 1. Configure other action settings as needed.
+1. On the designer toolbar, select **Save**.
+1. Test your logic app to make sure your selected container contains a blob.
- ![Select Azure Blob Storage action](./media/connectors-create-api-azureblobstorage/add-azure-blob-storage-action.png)
+> [!TIP]
+> This example only reads the contents of a blob. To view the contents, add another action that creates a file with the blob by using another connector. For example, add a OneDrive action that creates a file based on the blob contents.
-4. If you're prompted for connection details, [create your Azure Blob Storage connection now](#create-connection).
-Or, if your connection already exists, provide the necessary information for the action.
+### [Multi-tenant](#tab/multi-tenant)
- For this example, select the file you want.
+For logic apps in a multi-tenant environment:
- 1. From the **Blob** box, select the folder icon.
-
- ![Select storage folder to use with action](./media/connectors-create-api-azureblobstorage/action-select-folder.png)
+1. Sign in to the [Azure portal](https://portal.azure.com).
+1. Open your workflow in the Logic Apps Designer.
+1. Add a trigger. This example starts with the [**Recurrence** trigger](../connectors/connectors-native-recurrence.md).
+1. Add a new step to your workflow. In the search box, enter "Azure blob" as your filter. Then, select the Blob Storage action that you want to use. This example uses **Get blob content**.
+ :::image type="content" source="./media/connectors-create-api-azureblobstorage/consumption-action-add.png" alt-text="Screenshot of Consumption logic app in designer, showing list of available Blob Storage actions.":::
+1. If you're prompted for connection details, [create a connection to your Blob Storage account](#connect-to-storage-account).
+1. Provide the necessary information for the action.
+ 1. For **Blob**, select the folder icon to choose your blob storage container. Or, enter the path manually.
+ :::image type="content" source="./media/connectors-create-api-azureblobstorage/consumption-action-configure.png" alt-text="Screenshot of Consumption logic app in designer, showing configuration of Blob Storage action.":::
+ 1. Configure other action settings as needed.
- 2. Find and select the file you want based on the blob's **ID** number. You can find this **ID** number in the blob's metadata that is returned by the previously described blob storage trigger.
-
-5. When you're done, on the designer toolbar, choose **Save**.
-To test your logic app, make sure that the selected folder contains a blob.
-
-This example only gets the contents for a blob. To view the contents, add another action that creates a file with the blob by using another connector. For example, add a OneDrive action that creates a file based on the blob contents.
-
-<a name="create-connection"></a>
+ ## Connect to storage account [!INCLUDE [Create connection general intro](../../includes/connectors-create-connection-general-intro.md)]
-1. When you're prompted to created the connection, provide this information:
-
- | Property | Required | Value | Description |
- |-|-|-|-|
- | **Connection Name** | Yes | <*connection-name*> | The name to create for your connection |
- | **Storage Account** | Yes | <*storage-account*> | Select your storage account from the list. |
- ||||
-
- For example:
+Before you can configure your [blob storage trigger](#add-blob-storage-trigger) or [blob storage action](#add-blob-storage-action), you need to connect to a storage account. A connection requires the following properties.
- ![Create Azure Blob storage account connection](./media/connectors-create-api-azureblobstorage/create-storage-account-connection.png)
+| Property | Required | Value | Description |
+|-|-|-|-|
+| **Connection Name** | Yes | <*connection-name*> | The name to create for your connection |
+| **Azure Blob Storage Connection String** | Yes | <*storage-account*> | Select your storage account from the list, or provide a string. |
-1. When you're ready, select **Create**
+> [!TIP]
+> To find a connection string, go to the storage account's page. In the navigation menu, under **Security + networking**, select **Access keys**. Select **Show keys**. Copy one of the two available connection string values.
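If you want to sanity-check a connection string outside of Logic Apps, a small script like the following can help. This is an optional aside using the `azure-storage-blob` Python SDK rather than the connector itself; the placeholder value is an assumption you replace with the string copied from **Access keys**.

```python
from azure.storage.blob import BlobServiceClient

# Paste the connection string copied from Access keys > Show keys.
connection_string = "<your-storage-connection-string>"

service = BlobServiceClient.from_connection_string(connection_string)

# Listing containers confirms the connection string is valid and that the
# account is reachable from your network.
for container in service.list_containers():
    print(container.name)
```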
-1. After you create your connection, continue with [Add blob storage trigger](#add-trigger) or [Add blob storage action](#add-action).
+### [Single-tenant](#tab/single-tenant)
-## Connector reference
+For logic apps in a single-tenant environment:
-For more technical details about this connector, such as triggers, actions, and limits as described by the connector's Swagger file, see the [connector's reference page](/connectors/azureblobconnector/).
+1. For **Connection name**, enter a name for your connection.
+1. For **Azure Blob Storage Connection String**, enter the connection string for the storage account you want to use.
+1. Select **Create** to establish your connection.
+ :::image type="content" source="./media/connectors-create-api-azureblobstorage/standard-connection-create.png" alt-text="Screenshot of Standard logic app in designer, showing prompt to add a new connection to a blob storage step.":::
-> [!NOTE]
-> For logic apps in an [integration service environment (ISE)](../logic-apps/connect-virtual-network-vnet-isolated-environment-overview.md),
-> this connector's ISE-labeled version uses the [ISE message limits](../logic-apps/logic-apps-limits-and-config.md#message-size-limits) instead.
-
-<a name="storage-firewalls"></a>
-
-## Access storage accounts behind firewalls
+If you already have an existing connection, but you want to choose a different one, select **Change connection** in the step's editor.
-You can add network security to an Azure storage account by restricting access with a [firewall and firewall rules](../storage/common/storage-network-security.md). However, this setup creates a challenge for Azure and other Microsoft services that need access to the storage account. Local communication in the datacenter abstracts the internal IP addresses, so you can't set up firewall rules with IP restrictions. For more information, see [Configure Azure Storage firewalls and virtual networks](../storage/common/storage-network-security.md).
+If you have problems connecting to your storage account, see [how to access storage accounts behind firewalls](#access-storage-accounts-behind-firewalls).
-Here are various options for accessing storage accounts behind firewalls from Azure Logic Apps by using either the Azure Blob Storage connector or other solutions:
+### [Multi-tenant](#tab/multi-tenant)
-* Azure Storage Blob connector
+For logic apps in a multi-tenant environment:
- * [Access storage accounts in other regions](#access-other-regions)
- * [Access storage accounts through a trusted virtual network](#access-trusted-virtual-network)
+1. For **Connection name**, enter a name for your connection.
+1. For **Storage Account**, select the storage account that your blob container is in. Or, select **Manually enter connection information** to provide the path yourself.
+1. Select **Create** to establish your connection.
+ :::image type="content" source="./media/connectors-create-api-azureblobstorage/consumption-connection-create.png" alt-text="Screenshot of Consumption logic app in designer, showing prompt to add a new connection to a blob storage step.":::
-* Other solutions
-
- * [Access storage accounts as a trusted service with managed identities](#access-trusted-service)
- * [Access storage accounts through Azure API Management](#access-api-management)
-
-<a name="access-other-regions"></a>
-
-### Problems accessing storage accounts in the same region
+
-Logic apps can't directly access storage accounts behind firewalls when they're both in the same region. As a workaround, put your logic apps in a region that differs from your storage account and give access to the [outbound IP addresses for the managed connectors in your region](../logic-apps/logic-apps-limits-and-config.md#outbound).
+## Access storage accounts behind firewalls
-> [!NOTE]
-> This solution doesn't apply to the Azure Table Storage connector and Azure Queue Storage connector. Instead, to access your Table Storage or Queue Storage, use the built-in HTTP trigger and actions.
+You can add network security to an Azure storage account by [restricting access with a firewall and firewall rules](../storage/common/storage-network-security.md). However, this setup creates a challenge for Azure and other Microsoft services that need access to the storage account. Local communication in the data center abstracts the internal IP addresses, so you can't set up firewall rules with IP restrictions.
-<a name="access-trusted-virtual-network"></a>
+To access storage accounts behind firewalls using the Blob Storage connector:
-### Access storage accounts through a trusted virtual network
+- [Access storage accounts in other regions](#access-storage-accounts-in-other-regions)
+- [Access storage accounts through a trusted virtual network](#access-storage-accounts-through-trusted-virtual-network)
-You can put the storage account in an Azure virtual network that you manage, and then add that virtual network to the trusted virtual networks list. To have your logic app access the storage account through a [trusted virtual network](../virtual-network/virtual-networks-overview.md), you need to deploy that logic app to an [integration service environment (ISE)](../logic-apps/connect-virtual-network-vnet-isolated-environment-overview.md), which can connect to resources in a virtual network. You can then add the subnets in that ISE to the trusted list. Azure Storage connectors, such as the Blob Storage connector, can directly access the storage container. This setup is the same experience as using the service endpoints from an ISE.
+Other solutions for accessing storage accounts behind firewalls:
-<a name="access-trusted-service"></a>
+- [Access storage accounts as a trusted service with managed identities](#access-blob-storage-with-managed-identities)
+- [Access storage accounts through Azure API Management](#access-storage-accounts-through-azure-api-management)
-### Access storage accounts as a trusted service with managed identities
+### Access storage accounts in other regions
-To give Microsoft trusted services access to a storage account through a firewall, you can set up an exception on that storage account for those services. This solution permits Azure services that support [managed identities for authentication](../active-directory/managed-identities-azure-resources/overview.md) to access storage accounts behind firewalls as trusted services. Specifically, for a logic app in global multi-tenant Azure to access these storage accounts, you first [enable managed identity support](../logic-apps/create-managed-service-identity.md) on the logic app. Then, you use the HTTP action or trigger in your logic app and [set their authentication type to use your logic app's managed identity](../logic-apps/create-managed-service-identity.md#authenticate-access-with-managed-identity). For this scenario, you can use *only* the HTTP action or trigger.
+Logic apps can't directly access storage accounts behind firewalls when they're both in the same region. As a workaround, put your logic apps in a different region than your storage account. Then, give access to the [outbound IP addresses for the managed connectors in your region](../logic-apps/logic-apps-limits-and-config.md#outbound).
> [!NOTE]
-> If you use the managed identity capability for authenticating access to your storage account,
-> you can't use the built-in Azure Blob Storage operations. You have to use the HTTP trigger
-> or action that has the managed identity set up to authenticate your storage account connection.
-> To run the necessary storage operations, you then have to call the corresponding REST APIs
-> for Azure Blob Storage. For more information, review the
-> [Blob service REST API](/rest/api/storageservices/blob-service-rest-api).
+> This solution doesn't apply to the Azure Table Storage connector and Azure Queue Storage connector. Instead, to access your Table Storage or Queue Storage, [use the built-in HTTP trigger and actions](../logic-apps/logic-apps-http-endpoint.md).
-To set up the exception and managed identity support, follow these general steps:
+To add your outbound IP addresses to the storage account firewall:
-1. On your storage account, under **Settings**, select **Firewalls and virtual networks**. Under **Allow access from**, select the **Selected networks** option so that the related settings appear.
+1. Note the [outbound IP addresses](../logic-apps/logic-apps-limits-and-config.md#outbound) for your logic app's region.
+1. Sign in to the [Azure portal](https://portal.azure.com).
+1. Open your storage account's page. In the navigation menu, under **Security + networking**, select **Networking**.
+1. Under **Allow access from**, select the **Selected networks** option. Related settings now appear on the page.
+1. Under **Firewall**, add the IP addresses or ranges that need access.
+ :::image type="content" source="./media/connectors-create-api-azureblobstorage/storage-ip-configure.png" alt-text="Screenshot of blob storage account networking page in Azure portal, showing firewall settings to add IP addresses and ranges to the allowlist.":::
-1. Under **Exceptions**, select **Allow trusted Microsoft services to access this storage account**, and then select **Save**.
+### Access storage accounts through trusted virtual network
- ![Select exception that allows Microsoft trusted services](./media/connectors-create-api-azureblobstorage/allow-trusted-services-firewall.png)
+You can put the storage account in an Azure virtual network that you manage, and then add that virtual network to the trusted virtual networks list. To give your logic app access to the storage account through a [trusted virtual network](../virtual-network/virtual-networks-overview.md), you need to deploy that logic app to an [integration service environment (ISE)](../logic-apps/connect-virtual-network-vnet-isolated-environment-overview.md), which can connect to resources in a virtual network. You can then add the subnets in that ISE to the trusted list. Azure Storage connectors, such as the Blob Storage connector, can directly access the storage container. This setup is the same experience as using the service endpoints from an ISE.
-1. In your logic app's settings, [enable support for the managed identity](../logic-apps/create-managed-service-identity.md).
+### Access storage accounts through Azure API Management
-1. In your logic app's workflow, add and set up the HTTP action or trigger to access the storage account or entity.
+If you use a dedicated tier for [API Management](../api-management/api-management-key-concepts.md), you can front the Storage API by using API Management and permitting the latter's IP addresses through the firewall. Basically, add the Azure virtual network that's used by API Management to the storage account's firewall setting. You can then use either the API Management action or the HTTP action to call the Azure Storage APIs. However, if you choose this option, you have to handle the authentication process yourself. For more info, see [Simple enterprise integration architecture](/azure/architecture/reference-architectures/enterprise-integration/basic-enterprise-integration).
- > [!IMPORTANT]
- > For outgoing HTTP action or trigger calls to Azure Storage accounts,
- > make sure that the request header includes the `x-ms-version` property
- > and the API version for the operation that you want to run on the storage account.
- > For more information, see [Authenticate access with managed identity](../logic-apps/create-managed-service-identity.md#authenticate-access-with-managed-identity) and
- > [Versioning for Azure Storage services](/rest/api/storageservices/versioning-for-the-azure-storage-services#specifying-service-versions-in-requests).
+## Access Blob Storage with managed identities
-1. On that action, [select the managed identity](../logic-apps/create-managed-service-identity.md#authenticate-access-with-managed-identity) to use for authentication.
+If you want to access Blob Storage without using this Logic Apps connector, you can use [managed identities for authentication](../active-directory/managed-identities-azure-resources/overview.md) instead. You can create an exception that gives Microsoft trusted services, such as a managed identity, access to your storage account through a firewall.
-<a name="access-api-management"></a>
+To use managed identities in your logic app to access Blob Storage:
-### Access storage accounts through Azure API Management
+1. [Configure access to your storage account](#configure-storage-account-access)
+1. [Create a role assignment for your logic app](#create-role-assignment-for-logic-app)
+1. [Enable support for the managed identity in your logic app](#enable-support-for-managed-identity-in-logic-app)
-If you use a dedicated tier for [API Management](../api-management/api-management-key-concepts.md), you can front the Storage API by using API Management and permitting the latter's IP addresses through the firewall. Basically, add the Azure virtual network that's used by API Management to the storage account's firewall setting. You can then use either the API Management action or the HTTP action to call the Azure Storage APIs. However, if you choose this option, you have to handle the authentication process yourself. For more info, see [Simple enterprise integration architecture](/azure/architecture/reference-architectures/enterprise-integration/basic-enterprise-integration).
+> [!NOTE]
+> Limitations for this solution:
+> - You can *only* use the HTTP trigger or action in your workflow.
+> - You must set up a managed identity to authenticate your storage account connection.
+> - You can't use built-in Blob Storage operations if you authenticate with a managed identity.
+> - For logic apps in a single-tenant environment, only the system-assigned managed identity is available and supported, not the user-assigned managed identity.
+
+### Configure storage account access
+
+To set up the exception and managed identity support, first configure appropriate access to your storage account:
+
+1. Sign in to the [Azure portal](https://portal.azure.com).
+1. Open your storage account's page. In the navigation menu, under **Security + networking**, select **Networking**.
+1. Under **Allow access from**, select the **Selected networks** option. Related settings now appear on the page.
+1. If you need to access the storage account from your computer, under **Firewall**, enable **Add your client IP address**.
+1. Under **Exceptions**, enable **Allow trusted Microsoft services to access this storage account**.
+ :::image type="content" source="./media/connectors-create-api-azureblobstorage/storage-networking-configure.png" alt-text="Screenshot of blob storage account networking page in Azure portal, showing settings to allow selected networks, client IP address, and trusted Microsoft services.":::
+1. Select **Save**.
+
+> [!TIP]
+> If you receive a **403 Forbidden** error when you try to connect to the storage account from your workflow, there are multiple possible causes. Try the following resolution before moving on to additional steps. First, disable the setting **Allow trusted Microsoft services to access this storage account** and save your changes. Then, re-enable the setting, and save your changes again.
+
+### Create role assignment for logic app
+
+Next, [enable managed identity support](../logic-apps/create-managed-service-identity.md) on your logic app.
+
+1. Open your logic app in the Azure portal.
+1. In the navigation menu, under **Settings**, select **Identity.**
+1. Under **System assigned**, set **Status** to **On**. This setting might already be enabled.
+1. Under **Permissions**, select **Azure role assignments**.
+ :::image type="content" source="./media/connectors-create-api-azureblobstorage/role-assignment-add-1.png" alt-text="Screenshot of logic app menu in Azure portal, showing identity settings pane with button to add Azure role assignment permissions.":::
+1. On the **Azure role assignments** pane, select **Add role assignment**.
+ :::image type="content" source="./media/connectors-create-api-azureblobstorage/role-assignment-add-2.png" alt-text="Screenshot of logic app role assignments pane, showing selected subscription and button to add a new role assignment.":::
+1. Configure the new role assignment as follows.
+ 1. For **Scope**, select **Storage**.
+ 1. For **Subscription**, choose the subscription that your storage account is in.
+ 1. For **Resource**, choose the storage account that you want to access from your logic app.
+ 1. For **Role**, select the appropriate permissions for your scenario. This example uses **Storage Blob Data Contributor**, which allows read, write, and delete access to blob containers and data. Hover over the information icon next to a role in the drop-down menu for permissions details.
+ 1. Select **Save** to finish creating the role assignment.
+ :::image type="content" source="./media/connectors-create-api-azureblobstorage/role-assignment-configure.png" alt-text="Screenshot of role assignment configuration pane, showing settings for scope, subscription, resource, and role.":::
+
+### Enable support for managed identity in logic app
+
+Next, add an [HTTP trigger or action](/connectors/connectors-native-http) in your workflow. Make sure to [set the authentication type to use the managed identity](../logic-apps/create-managed-service-identity.md#authenticate-access-with-managed-identity).
+
+The steps are the same for logic apps in both single-tenant and multi-tenant environments.
+
+1. Open your workflow in the Logic Apps Designer.
+1. Add a new step to your workflow with an **HTTP** trigger or action, depending on your scenario.
+1. Configure all required parameters for your **HTTP** trigger or action.
+ 1. Choose a **Method** for your request. This example uses the HTTP PUT method.
+ 1. Enter the **URI** for your blob. The path resembles `https://{your-storage-account}.blob.core.windows.net/{your-blob}`.
+ 1. Under **Headers**, add the blob type header `x-ms-blob-type` with the value `BlockBlob`.
+ 1. Under **Headers**, also add the API version header `x-ms-version` with the appropriate value. For more information, see [Authenticate access with managed identity](../logic-apps/create-managed-service-identity.md#authenticate-access-with-managed-identity) and [Versioning for Azure Storage services](/rest/api/storageservices/versioning-for-the-azure-storage-services#specifying-service-versions-in-requests).
+ :::image type="content" source="./media/connectors-create-api-azureblobstorage/managed-identity-connect.png" alt-text="Screenshot of Logic Apps Designer, showing required HTTP PUT action parameters.":::
+1. Select **Add a new parameter** and choose **Authentication** to [configure the managed identity](../logic-apps/create-managed-service-identity.md#authenticate-access-with-managed-identity).
+ 1. Under **Authentication**, for **Authentication type**, choose **Managed identity**.
+ 1. For **Managed identity**, choose **System-assigned managed identity**.
+ :::image type="content" source="./media/connectors-create-api-azureblobstorage/managed-identity-authenticate.png" alt-text="Screenshot of Logic Apps Designer, showing HTTP action authentication parameter settings for managed identity.":::
+1. In the Logic Apps Designer toolbar, select **Save**.
+
+Now, you can call the [Blob service REST API](/rest/api/storageservices/blob-service-rest-api) to run any necessary storage operations.
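Outside the designer, the equivalent REST call looks roughly like the following Python sketch. It acquires a token for the storage resource with `DefaultAzureCredential` (standing in for the logic app's system-assigned identity, which is an assumption made here for local illustration) and sends the same headers configured in the HTTP action above.

```python
import requests
from azure.identity import DefaultAzureCredential

# Acquire an Azure AD token for Azure Storage. In the logic app, the
# system-assigned managed identity plays this role.
credential = DefaultAzureCredential()
token = credential.get_token("https://storage.azure.com/.default").token

blob_url = "https://<your-storage-account>.blob.core.windows.net/<container>/<blob-name>"

headers = {
    "Authorization": f"Bearer {token}",
    "x-ms-blob-type": "BlockBlob",   # required blob type header, as configured above
    "x-ms-version": "2020-10-02",    # assumed service version; use the one your scenario requires
}

# PUT creates or replaces the block blob at the given URI.
response = requests.put(blob_url, headers=headers, data=b"Hello from a managed identity")
print(response.status_code)
```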
## Next steps
-* Learn about other [Logic Apps connectors](../connectors/apis-list.md)
+> [!div class="nextstepaction"]
+> [Learn more about Logic Apps connectors](../connectors/apis-list.md)
cosmos-db Cosmos Db Advanced Queries https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cosmos-db/cosmos-db-advanced-queries.md
+
+ Title: Troubleshoot issues with advanced diagnostics queries (SQL API)
+
+description: Learn how to query diagnostics logs to troubleshoot data stored in Azure Cosmos DB - SQL API
++++ Last updated : 06/12/2021+++
+# Troubleshoot issues with advanced diagnostics queries for SQL (Core) API
++
+> [!div class="op_single_selector"]
+> * [SQL (Core) API](cosmos-db-advanced-queries.md)
+> * [MongoDB API](queries-mongo.md)
+> * [Cassandra API](queries-cassandra.md)
+> * [Gremlin API](queries-gremlin.md)
+>
+
+In this article, we'll cover how to write more advanced queries to help troubleshoot issues with your Azure Cosmos DB account using diagnostics logs sent to **AzureDiagnostics (legacy)** and **Resource-specific (preview)** tables.
+
+For Azure Diagnostics tables, all data is written into a single table, and you need to specify which category you'd like to query. If you'd like to view the full-text query of your request, [follow this article](cosmosdb-monitor-resource-logs.md#full-text-query) to learn how to enable this feature.
+
+For [resource-specific tables](cosmosdb-monitor-resource-logs.md#create-setting-portal), data is written into individual tables for each category of the resource. We recommend this mode since it makes it much easier to work with the data, provides better discoverability of the schemas, and improves performance across both ingestion latency and query times.
+
+## Common queries
+
+- Top N(10) queries ordered by request units consumption in a given time frame
+
+# [Resource-specific](#tab/resource-specific)
+
+ ```Kusto
+ let topRequestsByRUcharge = CDBDataPlaneRequests
+ | where TimeGenerated > ago(24h)
+ | project RequestCharge , TimeGenerated, ActivityId;
+ CDBQueryRuntimeStatistics
+ | project QueryText, ActivityId, DatabaseName , CollectionName
+ | join kind=inner topRequestsByRUcharge on ActivityId
+ | project DatabaseName , CollectionName , QueryText , RequestCharge, TimeGenerated
+ | order by RequestCharge desc
+ | take 10
+ ```
+# [Azure Diagnostics](#tab/azure-diagnostics)
+
+ ```Kusto
+ let topRequestsByRUcharge = AzureDiagnostics
+ | where Category == "DataPlaneRequests" and TimeGenerated > ago(24h)
+ | project requestCharge_s , TimeGenerated, activityId_g;
+ AzureDiagnostics
+ | where Category == "QueryRuntimeStatistics"
+ | project querytext_s, activityId_g, databasename_s , collectionname_s
+ | join kind=inner topRequestsByRUcharge on activityId_g
+ | project databasename_s , collectionname_s , querytext_s , requestCharge_s, TimeGenerated
+ | order by requestCharge_s desc
+ | take 10
+ ```
++
+- Requests throttled (statusCode = 429) in a given time window
+
+# [Resource-specific](#tab/resource-specific)
+
+ ```Kusto
+ let throttledRequests = CDBDataPlaneRequests
+ | where StatusCode == "429"
+ | project OperationName , TimeGenerated, ActivityId;
+ CDBQueryRuntimeStatistics
+ | project QueryText, ActivityId, DatabaseName , CollectionName
+ | join kind=inner throttledRequests on ActivityId
+ | project DatabaseName , CollectionName , QueryText , OperationName, TimeGenerated
+ ```
+# [Azure Diagnostics](#tab/azure-diagnostics)
+
+ ```Kusto
+ let throttledRequests = AzureDiagnostics
+ | where Category == "DataPlaneRequests" and statusCode_s == "429"
+ | project OperationName , TimeGenerated, activityId_g;
+ AzureDiagnostics
+ | where Category == "QueryRuntimeStatistics"
+ | project querytext_s, activityId_g, databasename_s , collectionname_s
+ | join kind=inner throttledRequests on activityId_g
+ | project databasename_s , collectionname_s , querytext_s , OperationName, TimeGenerated
+ ```
++
+- Queries with the largest response lengths (payload size of the server response)
+
+# [Resource-specific](#tab/resource-specific)
+
+ ```Kusto
+ let operationsbyUserAgent = CDBDataPlaneRequests
+ | project OperationName, DurationMs, RequestCharge, ResponseLength, ActivityId;
+ CDBQueryRuntimeStatistics
+ //specify collection and database
+ //| where DatabaseName == "DBNAME" and CollectionName == "COLLECTIONNAME"
+ | join kind=inner operationsbyUserAgent on ActivityId
+ | summarize max(ResponseLength) by QueryText
+ | order by max_ResponseLength desc
+ ```
+# [Azure Diagnostics](#tab/azure-diagnostics)
+
+ ```Kusto
+ let operationsbyUserAgent = AzureDiagnostics
+ | where Category=="DataPlaneRequests"
+ | project OperationName, duration_s, requestCharge_s, responseLength_s, activityId_g;
+ AzureDiagnostics
+ | where Category == "QueryRuntimeStatistics"
+ //specify collection and database
+    //| where databasename_s == "DBNAME" and collectionname_s == "COLLECTIONNAME"
+ | join kind=inner operationsbyUserAgent on activityId_g
+ | summarize max(responseLength_s1) by querytext_s
+ | order by max_responseLength_s1 desc
+ ```
++
+- RU Consumption by physical partition (across all replicas in the replica set)
+
+# [Resource-specific](#tab/resource-specific)
+
+ ```Kusto
+ CDBPartitionKeyRUConsumption
+ | where TimeGenerated >= now(-1d)
+ //specify collection and database
+ //| where DatabaseName == "DBNAME" and CollectionName == "COLLECTIONNAME"
+ // filter by operation type
+ //| where operationType_s == 'Create'
+ | summarize sum(todouble(RequestCharge)) by toint(PartitionKeyRangeId)
+ | render columnchart
+ ```
+# [Azure Diagnostics](#tab/azure-diagnostics)
+
+ ```Kusto
+ AzureDiagnostics
+ | where TimeGenerated >= now(-1d)
+ | where Category == 'PartitionKeyRUConsumption'
+ //specify collection and database
+    //| where databasename_s == "DBNAME" and collectionname_s == "COLLECTIONNAME"
+ // filter by operation type
+ //| where operationType_s == 'Create'
+ | summarize sum(todouble(requestCharge_s)) by toint(partitionKeyRangeId_s)
+ | render columnchart
+ ```
++
+- RU Consumption by logical partition (across all replicas in the replica set)
+
+# [Resource-specific](#tab/resource-specific)
+
+ ```Kusto
+ CDBPartitionKeyRUConsumption
+ | where TimeGenerated >= now(-1d)
+ //specify collection and database
+ //| where DatabaseName == "DBNAME" and CollectionName == "COLLECTIONNAME"
+ // filter by operation type
+ //| where operationType_s == 'Create'
+ | summarize sum(todouble(RequestCharge)) by PartitionKey, PartitionKeyRangeId
+ | render columnchart
+ ```
+# [Azure Diagnostics](#tab/azure-diagnostics)
+
+ ```Kusto
+ AzureDiagnostics
+ | where TimeGenerated >= now(-1d)
+ | where Category == 'PartitionKeyRUConsumption'
+ //specify collection and database
+    //| where databasename_s == "DBNAME" and collectionname_s == "COLLECTIONNAME"
+ // filter by operation type
+ //| where operationType_s == 'Create'
+ | summarize sum(todouble(requestCharge_s)) by partitionKey_s, partitionKeyRangeId_s
+ | render columnchart
+ ```
++
+## Next steps
+* For more information on how to create diagnostic settings for Cosmos DB, see the [Creating Diagnostics settings](cosmosdb-monitor-resource-logs.md) article.
+
+* For detailed information about how to create a diagnostic setting by using the Azure portal, CLI, or PowerShell, see the [create diagnostic setting to collect platform logs and metrics in Azure](../azure-monitor/essentials/diagnostic-settings.md) article.
cosmos-db Cosmosdb Monitor Logs Basic Queries https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cosmos-db/cosmosdb-monitor-logs-basic-queries.md
In this article, we'll cover how to write simple queries to help troubleshoot is
For Azure Diagnostics tables, all data is written into one single table and users will need to specify which category they'd like to query.
-For resource-specific tables (currently in preview for SQL API), data is written into individual tables for each category of the resource. We recommend this mode since it makes it much easier to work with the data, provides better discoverability of the schemas, and improves performance across both ingestion latency and query times.
+For resource-specific tables, data is written into individual tables for each category of the resource (not available for table API). We recommend this mode since it makes it much easier to work with the data, provides better discoverability of the schemas, and improves performance across both ingestion latency and query times.
## <a id="azure-diagnostics-queries"></a> AzureDiagnostics Queries
For resource-specific tables (currently in preview for SQL API), data is written
| summarize by OperationName ```
-## Next steps
+## Next steps
* For more information on how to create diagnostic settings for Cosmos DB, see the [Creating Diagnostics settings](cosmosdb-monitor-resource-logs.md) article. * For detailed information about how to create a diagnostic setting by using the Azure portal, CLI, or PowerShell, see the [create diagnostic setting to collect platform logs and metrics in Azure](../azure-monitor/essentials/diagnostic-settings.md) article.
cosmos-db Cosmosdb Monitor Resource Logs https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cosmos-db/cosmosdb-monitor-resource-logs.md
Platform metrics and the Activity logs are collected automatically, whereas you
- Storage Account > [!NOTE]
-> For SQL API accounts, we recommend creating the diagnostic setting in resource specific mode [following our instructions for creating diagnostics setting via REST API](cosmosdb-monitor-resource-logs.md#create-diagnostic-setting). This option provides additional cost-optimizations with an improved view for handling data.
+> We recommend creating the diagnostic setting in resource-specific mode (for all APIs except Table API) [following our instructions for creating diagnostics setting via REST API](cosmosdb-monitor-resource-logs.md#create-diagnostic-setting). This option provides additional cost-optimizations with an improved view for handling data.
-## Create using the Azure portal
+## <a id="create-setting-portal"></a> Create diagnostics settings via the Azure portal
1. Sign into the [Azure portal](https://portal.azure.com).
-1. Navigate to your Azure Cosmos account. Open the **Diagnostic settings** pane, and then select **Add diagnostic setting** option.
+2. Navigate to your Azure Cosmos account. Open the **Diagnostic settings** pane under the **Monitoring section**, and then select **Add diagnostic setting** option.
-2. In the **Diagnostic settings** pane, fill the form with your preferred categories.
+ :::image type="content" source="./media/monitor-cosmos-db/diagnostics-settings-selection.png" alt-text="Select diagnostics":::
-### Choosing Log Categories
-|Category |API | Definition | Key Properties |
-|||||
-|DataPlaneRequests | All APIs | Logs back-end requests as data plane operations which are requests executed to create, update, delete or retrieve data within the account. | `Requestcharge`, `statusCode`, `clientIPaddress`, `partitionID`, `resourceTokenPermissionId` `resourceTokenPermissionMode` |
-|MongoRequests | Mongo | Logs user-initiated requests from the front end to serve requests to Azure Cosmos DB's API for MongoDB. When you enable this category, make sure to disable DataPlaneRequests. | `Requestcharge`, `opCode`, `retryCount`, `piiCommandText` |
-|CassandraRequests | Cassandra | Logs user-initiated requests from the front end to serve requests to Azure Cosmos DB's API for Cassandra. When you enable this category, make sure to disable DataPlaneRequests. | `operationName`, `requestCharge`, `piiCommandText` |
-|GremlinRequests | Gremlin | Logs user-initiated requests from the front end to serve requests to Azure Cosmos DB's API for Gremlin. When you enable this category, make sure to disable DataPlaneRequests. | `operationName`, `requestCharge`, `piiCommandText`, `retriedDueToRateLimiting` |
-|QueryRuntimeStatistics | SQL | This table details query operations executed against a SQL API account. By default, the query text and its parameters are obfuscated to avoid logging PII data with full text query logging available by request. | `databasename`, `partitionkeyrangeid`, `querytext` |
-|PartitionKeyStatistics | All APIs | Logs the statistics of logical partition keys by representing the storage size (KB) of the partition keys. This table is useful when troubleshooting storage skews. | `subscriptionId`, `regionName`, `partitionKey`, `sizeKB` |
-|PartitionKeyRUConsumption | SQL API | Logs the aggregated per-second RU/s consumption of partition keys. This table is useful for troubleshooting hot partitions. Currently, Azure Cosmos DB reports partition keys for SQL API accounts only and for point read/write and stored procedure operations. | `subscriptionId`, `regionName`, `partitionKey`, `requestCharge`, `partitionKeyRangeId` |
-|ControlPlaneRequests | All APIs | Logs details on control plane operations i.e. creating an account, adding or removing a region, updating account replication settings etc. | `operationName`, `httpstatusCode`, `httpMethod`, `region` |
-|TableApiRequests | Table API | Logs user-initiated requests from the front end to serve requests to Azure Cosmos DB's API for Table. When you enable this category, make sure to disable DataPlaneRequests. | `operationName`, `requestCharge`, `piiCommandText` |
+3. In the **Diagnostic settings** pane, fill the form with your preferred categories.
+### Choose log categories
+
+ |Category |API | Definition | Key Properties |
+ |||||
+ |DataPlaneRequests | All APIs | Logs back-end requests as data plane operations which are requests executed to create, update, delete or retrieve data within the account. | `Requestcharge`, `statusCode`, `clientIPaddress`, `partitionID`, `resourceTokenPermissionId` `resourceTokenPermissionMode` |
+ |MongoRequests | Mongo | Logs user-initiated requests from the front end to serve requests to Azure Cosmos DB's API for MongoDB. When you enable this category, make sure to disable DataPlaneRequests. | `Requestcharge`, `opCode`, `retryCount`, `piiCommandText` |
+ |CassandraRequests | Cassandra | Logs user-initiated requests from the front end to serve requests to Azure Cosmos DB's API for Cassandra. When you enable this category, make sure to disable DataPlaneRequests. | `operationName`, `requestCharge`, `piiCommandText` |
+ |GremlinRequests | Gremlin | Logs user-initiated requests from the front end to serve requests to Azure Cosmos DB's API for Gremlin. When you enable this category, make sure to disable DataPlaneRequests. | `operationName`, `requestCharge`, `piiCommandText`, `retriedDueToRateLimiting` |
+ |QueryRuntimeStatistics | SQL | This table details query operations executed against a SQL API account. By default, the query text and its parameters are obfuscated to avoid logging personal data with full text query logging available by request. | `databasename`, `partitionkeyrangeid`, `querytext` |
+    |PartitionKeyStatistics | All APIs | Logs the statistics of logical partition keys by representing the storage size (KB) of the partition keys. This table is useful when troubleshooting storage skews. The PartitionKeyStatistics log is emitted only if the following conditions are true: <br/><ul><li> At least 1% of the documents have the same logical partition key. </li><li> There are at least 100 such partition keys. </li><li> Out of all the keys, the top three keys with the largest storage size are captured in the PartitionKeyStatistics log. </li></ul> If these conditions are not met, the partition key statistics data is not available; this is expected and not an issue for your account. | `subscriptionId`, `regionName`, `partitionKey`, `sizeKB` |
+ |PartitionKeyRUConsumption | SQL API | Logs the aggregated per-second RU/s consumption of partition keys. This table is useful for troubleshooting hot partitions. Currently, Azure Cosmos DB reports partition keys for SQL API accounts only and for point read/write and stored procedure operations. | `subscriptionId`, `regionName`, `partitionKey`, `requestCharge`, `partitionKeyRangeId` |
+ |ControlPlaneRequests | All APIs | Logs details on control plane operations i.e. creating an account, adding or removing a region, updating account replication settings etc. | `operationName`, `httpstatusCode`, `httpMethod`, `region` |
+ |TableApiRequests | Table API | Logs user-initiated requests from the front end to serve requests to Azure Cosmos DB's API for Table. When you enable this category, make sure to disable DataPlaneRequests. | `operationName`, `requestCharge`, `piiCommandText` |
+
+4. After you select your **Categories details**, send your logs to your preferred destination. If you're sending logs to a **Log Analytics Workspace**, make sure to select **Resource specific** as the Destination table.
+
+ :::image type="content" source="./media/monitor-cosmos-db/diagnostics-resource-specific.png" alt-text="Select enable resource-specific":::
## <a id="create-diagnostic-setting"></a> Create diagnostic setting via REST API
-Use the [Azure Monitor REST API](/rest/api/monitor/diagnosticsettings/createorupdate)
- for creating a diagnostic setting via the interactive console.
+Use the [Azure Monitor REST API](/rest/api/monitor/diagnosticsettings/createorupdate) for creating a diagnostic setting via the interactive console.
> [!Note]
-> If you are using SQL API, we recommend setting the **logAnalyticsDestinationType** property to **Dedicated** for enabling resource specific tables.
+> We recommend setting the **logAnalyticsDestinationType** property to **Dedicated** to enable resource-specific tables.
### Request
-```HTTP
-PUT
-https://management.azure.com/{resource-id}/providers/microsoft.insights/diagnosticSettings/service?api-version={api-version}
-```
+ ```HTTP
+ PUT
+ https://management.azure.com/{resource-id}/providers/microsoft.insights/diagnosticSettings/service?api-version={api-version}
+ ```
### Headers
-|Parameters/Headers | Value/Description |
-|||
-|name | The name of your Diagnostic setting. |
-|resourceUri | subscriptions/{SUBSCRIPTION_ID}/resourceGroups/{RESOURCE_GROUP}/providers/Microsoft.DocumentDb/databaseAccounts/{ACCOUNT_NAME}/providers/microsoft.insights/diagnosticSettings/{DIAGNOSTIC_SETTING_NAME} |
-|api-version | 2017-05-01-preview |
-|Content-Type | application/json |
+ |Parameters/Headers | Value/Description |
+ |||
+ |name | The name of your Diagnostic setting. |
+ |resourceUri | subscriptions/{SUBSCRIPTION_ID}/resourceGroups/{RESOURCE_GROUP}/providers/Microsoft.DocumentDb/databaseAccounts/{ACCOUNT_NAME}/providers/microsoft.insights/diagnosticSettings/{DIAGNOSTIC_SETTING_NAME} |
+ |api-version | 2017-05-01-preview |
+ |Content-Type | application/json |
### Body
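The full request body isn't reproduced here. As a hedged sketch only (the workspace resource ID uses placeholders and the category list is just an example), a body that enables resource-specific tables would look roughly like the following, with `logAnalyticsDestinationType` set to `Dedicated` as called out in the note above:

```json
{
  "properties": {
    "workspaceId": "/subscriptions/{SUBSCRIPTION_ID}/resourcegroups/{RESOURCE_GROUP}/providers/microsoft.operationalinsights/workspaces/{WORKSPACE_NAME}",
    "logAnalyticsDestinationType": "Dedicated",
    "logs": [
      {
        "category": "QueryRuntimeStatistics",
        "enabled": true,
        "retentionPolicy": {
          "enabled": false,
          "days": 0
        }
      }
    ]
  }
}
```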
Use the [az monitor diagnostic-settings create](/cli/azure/monitor/diagnostic-se
> [!Note] > If you are using SQL API, we recommend setting the **export-to-resource-specific** property to **true**.
-```azurecli-interactive
-az monitor diagnostic-settings create --resource /subscriptions/{SUBSCRIPTION_ID}/resourceGroups/{RESOURCE_GROUP}/providers/Microsoft.DocumentDb/databaseAccounts/{RESOURCE_NAME} --name {DIAGNOSTIC_SETTING_NAME} --export-to-resource-specific true --logs '[{"category": "QueryRuntimeStatistics","categoryGroup": null,"enabled": true,"retentionPolicy": {"enabled": false,"days": 0}}]' --workspace /subscriptions/{SUBSCRIPTION_ID}/resourcegroups/{RESOURCE_GROUP}/providers/microsoft.operationalinsights/workspaces/{WORKSPACE_NAME}"
-```
+ ```azurecli-interactive
+    az monitor diagnostic-settings create --resource /subscriptions/{SUBSCRIPTION_ID}/resourceGroups/{RESOURCE_GROUP}/providers/Microsoft.DocumentDb/databaseAccounts/{RESOURCE_NAME} --name {DIAGNOSTIC_SETTING_NAME} --export-to-resource-specific true --logs '[{"category": "QueryRuntimeStatistics","categoryGroup": null,"enabled": true,"retentionPolicy": {"enabled": false,"days": 0}}]' --workspace /subscriptions/{SUBSCRIPTION_ID}/resourcegroups/{RESOURCE_GROUP}/providers/microsoft.operationalinsights/workspaces/{WORKSPACE_NAME}
+ ```
+## <a id="full-text-query"></a> Enable full-text query for logging query text
+
+> [!Note]
+> Enabling this feature may result in additional logging costs. For pricing details, visit [Azure Monitor pricing](https://azure.microsoft.com/pricing/details/monitor/). It is recommended to disable this feature after troubleshooting.
+
+Azure Cosmos DB provides advanced logging for detailed troubleshooting. By enabling full-text query, you'll be able to view the deobfuscated query for all requests within your Azure Cosmos DB account. You'll also give permission for Azure Cosmos DB to access and surface this data in your logs.
+
+1. To enable this feature, navigate to the `Features` blade in your Cosmos DB account.
+
+ :::image type="content" source="./media/monitor-cosmos-db/full-text-query-features.png" alt-text="Navigate to Features blade":::
+
+2. Select `Enable`. This setting is then applied within the next few minutes. All newly ingested logs will include the full-text query or PIICommandText for each request.
+
+ :::image type="content" source="./media/monitor-cosmos-db/select-enable-full-text.png" alt-text="Select enable full-text":::
+
+To learn how to query using this newly enabled feature, visit [advanced queries](cosmos-db-advanced-queries.md).
## Next steps
-* For more information on how to query resource-specific tables see [troubleshooting using resource specific tables](cosmosdb-monitor-logs-basic-queries.md#resource-specific-queries).
+* For more information on how to query resource-specific tables see [troubleshooting using resource-specific tables](cosmosdb-monitor-logs-basic-queries.md#resource-specific-queries).
* For more information on how to query AzureDiagnostics tables see [troubleshooting using AzureDiagnostics tables](cosmosdb-monitor-logs-basic-queries.md#azure-diagnostics-queries).
cosmos-db Linux Emulator https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cosmos-db/linux-emulator.md
Since the Azure Cosmos DB Emulator provides an emulated environment that runs on
- The Linux emulator is not a scalable service and it doesn't support a large number of containers. When using the Azure Cosmos DB Emulator, by default, you can create up to 10 fixed size containers at 400 RU/s (only supported using Azure Cosmos DB SDKs), or 5 unlimited containers. For more information on how to change this value, see [Set the PartitionCount value](emulator-command-line-parameters.md#set-partitioncount) article. -- While [consistency levels](consistency-levels.md) like the cloud service does. can be adjusted using command-line arguments for testing scenarios only (default setting is Session), a user might not expect the same behavior as in the cloud service. For instance, Strong and Bounded staleness consistency has no effect on the emulator, other than signaling to the Cosmos DB SDK the default consistency of the account.
+- While [consistency levels](consistency-levels.md) can be adjusted using command-line arguments for testing scenarios only (default setting is Session), a user might not expect the same behavior as in the cloud service. For instance, Strong and Bounded staleness consistency has no effect on the emulator, other than signaling to the Cosmos DB SDK the default consistency of the account.
- The Linux emulator does not offer [multi-region replication](distribute-data-globally.md).
cosmos-db Queries Cassandra https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cosmos-db/queries-cassandra.md
+
+ Title: Troubleshoot issues with advanced diagnostics queries for Cassandra API
+
+description: Learn how to query diagnostics logs for troubleshooting data stored in Azure Cosmos DB for Cassandra API
++++ Last updated : 06/12/2021+++
+# Troubleshoot issues with advanced diagnostics queries for Cassandra API
++
+> [!div class="op_single_selector"]
+> * [SQL (Core) API](cosmos-db-advanced-queries.md)
+> * [MongoDB API](queries-mongo.md)
+> * [Cassandra API](queries-cassandra.md)
+> * [Gremlin API](queries-gremlin.md)
++
+In this article, we'll cover how to write more advanced queries to help troubleshoot issues with your Azure Cosmos DB account using diagnostics logs sent to **AzureDiagnostics (legacy)** and **Resource-specific (preview)** tables.
+
+For Azure Diagnostics tables, all data is written into one single table and users will need to specify which category they'd like to query. If you'd like to view the full-text query of your request, [follow this article](cosmosdb-monitor-resource-logs.md#full-text-query) to learn how to enable this feature.
+
+For [resource-specific tables](cosmosdb-monitor-resource-logs.md#create-setting-portal), data is written into individual tables for each category of the resource. We recommend this mode since it makes it much easier to work with the data, provides better discoverability of the schemas, and improves performance across both ingestion latency and query times.
+
+## Common queries
+
+- Top N(10) RU consuming requests/queries in a given time frame
+
+# [Resource-specific](#tab/resource-specific)
+
+ ```Kusto
+ let topRequestsByRUcharge = CDBDataPlaneRequests
+ | where TimeGenerated > ago(24h)
+ | project RequestCharge , TimeGenerated, ActivityId;
+ CDBCassandraRequests
+ | project PIICommandText, ActivityId, DatabaseName , CollectionName
+ | join kind=inner topRequestsByRUcharge on ActivityId
+ | project DatabaseName , CollectionName , PIICommandText , RequestCharge, TimeGenerated
+ | order by RequestCharge desc
+ | take 10
+ ```
+
+# [Azure Diagnostics](#tab/azure-diagnostics)
+
+ ```Kusto
+ let topRequestsByRUcharge = AzureDiagnostics
+ | where Category == "DataPlaneRequests" and TimeGenerated > ago(1h)
+ | project requestCharge_s , TimeGenerated, activityId_g;
+ AzureDiagnostics
+ | where Category == "CassandraRequests"
+ | project piiCommandText_s, activityId_g, databasename_s , collectionname_s
+ | join kind=inner topRequestsByRUcharge on activityId_g
+ | project databasename_s , collectionname_s , piiCommandText_s , requestCharge_s, TimeGenerated
+ | order by requestCharge_s desc
+ | take 10
+ ```
++
+- Requests throttled (statusCode = 429) in a given time window
+
+# [Resource-specific](#tab/resource-specific)
+
+ ```Kusto
+ let throttledRequests = CDBDataPlaneRequests
+ | where StatusCode == "429"
+ | project OperationName , TimeGenerated, ActivityId;
+ CDBCassandraRequests
+ | project PIICommandText, ActivityId, DatabaseName , CollectionName
+ | join kind=inner throttledRequests on ActivityId
+ | project DatabaseName , CollectionName , PIICommandText , OperationName, TimeGenerated
+ ```
+
+# [Azure Diagnostics](#tab/azure-diagnostics)
+
+ ```Kusto
+ let throttledRequests = AzureDiagnostics
+ | where Category == "DataPlaneRequests"
+ | where statusCode_s == "429"
+ | project OperationName , TimeGenerated, activityId_g;
+ AzureDiagnostics
+ | where Category == "CassandraRequests"
+ | project piiCommandText_s, activityId_g, databasename_s , collectionname_s
+ | join kind=inner throttledRequests on activityId_g
+ | project databasename_s , collectionname_s , piiCommandText_s , OperationName, TimeGenerated
+ ```
++
+- Queries with large response lengths (payload size of the server response)
+
+# [Resource-specific](#tab/resource-specific)
+
+ ```Kusto
+ let operationsbyUserAgent = CDBDataPlaneRequests
+ | project OperationName, DurationMs, RequestCharge, ResponseLength, ActivityId;
+ CDBCassandraRequests
+ //specify collection and database
+ //| where DatabaseName == "DBNAME" and CollectionName == "COLLECTIONNAME"
+ | join kind=inner operationsbyUserAgent on ActivityId
+ | summarize max(ResponseLength) by PIICommandText
+ | order by max_ResponseLength desc
+ ```
+
+# [Azure Diagnostics](#tab/azure-diagnostics)
+
+ ```Kusto
+ let operationsbyUserAgent = AzureDiagnostics
+ | where Category=="DataPlaneRequests"
+ | project OperationName, duration_s, requestCharge_s, responseLength_s, activityId_g;
+ AzureDiagnostics
+ | where Category == "CassandraRequests"
+ //specify collection and database
+    //| where databasename_s == "DBNAME" and collectionname_s == "COLLECTIONNAME"
+ | join kind=inner operationsbyUserAgent on activityId_g
+ | summarize max(responseLength_s1) by piiCommandText_s
+ | order by max_responseLength_s1 desc
+ ```
++
+- RU Consumption by physical partition (across all replicas in the replica set)
+
+# [Resource-specific](#tab/resource-specific)
+
+ ```Kusto
+ CDBPartitionKeyRUConsumption
+ | where TimeGenerated >= now(-1d)
+ //specify collection and database
+ //| where DatabaseName == "DBNAME" and CollectionName == "COLLECTIONNAME"
+ // filter by operation type
+ //| where operationType_s == 'Create'
+ | summarize sum(todouble(RequestCharge)) by toint(PartitionKeyRangeId)
+ | render columnchart
+ ```
+
+# [Azure Diagnostics](#tab/azure-diagnostics)
+
+ ```Kusto
+ AzureDiagnostics
+ | where TimeGenerated >= now(-1d)
+ | where Category == 'PartitionKeyRUConsumption'
+ //specify collection and database
+    //| where databasename_s == "DBNAME" and collectionname_s == "COLLECTIONNAME"
+ // filter by operation type
+ //| where operationType_s == 'Create'
+ | summarize sum(todouble(requestCharge_s)) by toint(partitionKeyRangeId_s)
+ | render columnchart
+ ```
++
+- RU Consumption by logical partition (across all replicas in the replica set)
+
+# [Resource-specific](#tab/resource-specific)
+ ```Kusto
+ CDBPartitionKeyRUConsumption
+ | where TimeGenerated >= now(-1d)
+ //specify collection and database
+ //| where DatabaseName == "DBNAME" and CollectionName == "COLLECTIONNAME"
+ // filter by operation type
+ //| where operationType_s == 'Create'
+ | summarize sum(todouble(RequestCharge)) by PartitionKey, PartitionKeyRangeId
+ | render columnchart
+ ```
+
+# [Azure Diagnostics](#tab/azure-diagnostics)
+
+ ```Kusto
+ AzureDiagnostics
+ | where TimeGenerated >= now(-1d)
+ | where Category == 'PartitionKeyRUConsumption'
+ //specify collection and database
+    //| where databasename_s == "DBNAME" and collectionname_s == "COLLECTIONNAME"
+ // filter by operation type
+ //| where operationType_s == 'Create'
+ | summarize sum(todouble(requestCharge_s)) by partitionKey_s, partitionKeyRangeId_s
+ | render columnchart
+ ```
++
+## Next steps
+* For more information on how to create diagnostic settings for Cosmos DB, see the [Creating Diagnostics settings](cosmosdb-monitor-resource-logs.md) article.
+
+* For detailed information about how to create a diagnostic setting by using the Azure portal, CLI, or PowerShell, see the [create diagnostic setting to collect platform logs and metrics in Azure](../azure-monitor/essentials/diagnostic-settings.md) article.
cosmos-db Queries Gremlin https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cosmos-db/queries-gremlin.md
+
+ Title: Troubleshoot issues with advanced diagnostics queries for Gremlin API
+
+description: Learn how to query diagnostics logs for troubleshooting data stored in Azure Cosmos DB for Gremlin API
++++ Last updated : 06/12/2021+++
+# Troubleshoot issues with advanced diagnostics queries for Gremlin API
++
+> [!div class="op_single_selector"]
+> * [SQL (Core) API](cosmos-db-advanced-queries.md)
+> * [MongoDB API](queries-mongo.md)
+> * [Cassandra API](queries-cassandra.md)
+> * [Gremlin API](queries-gremlin.md)
+>
+
+In this article, we'll cover how to write more advanced queries to help troubleshoot issues with your Azure Cosmos DB account using diagnostics logs sent to **AzureDiagnostics (legacy)** and **Resource-specific (preview)** tables.
+
+For Azure Diagnostics tables, all data is written into one single table and users will need to specify which category they'd like to query. If you'd like to view the full-text query of your request, [follow this article](cosmosdb-monitor-resource-logs.md#full-text-query) to learn how to enable this feature.
+
+For [resource-specific tables](cosmosdb-monitor-resource-logs.md#create-setting-portal), data is written into individual tables for each category of the resource. We recommend this mode since it makes it much easier to work with the data, provides better discoverability of the schemas, and improves performance across both ingestion latency and query times.
+
+## Common queries
+
+- Top N(10) RU consuming requests/queries in a given time frame
+
+# [Resource-specific](#tab/resource-specific)
+
+ ```Kusto
+ let topRequestsByRUcharge = CDBDataPlaneRequests
+ | where TimeGenerated > ago(24h)
+ | project RequestCharge , TimeGenerated, ActivityId;
+ CDBGremlinRequests
+ | project PIICommandText, ActivityId, DatabaseName , CollectionName
+ | join kind=inner topRequestsByRUcharge on ActivityId
+ | project DatabaseName , CollectionName , PIICommandText , RequestCharge, TimeGenerated
+ | order by RequestCharge desc
+ | take 10
+ ```
+# [Azure Diagnostics](#tab/azure-diagnostics)
+
+ ```Kusto
+ let topRequestsByRUcharge = AzureDiagnostics
+ | where Category == "DataPlaneRequests" and TimeGenerated > ago(1h)
+ | project requestCharge_s , TimeGenerated, activityId_g;
+ AzureDiagnostics
+ | where Category == "GremlinRequests"
+ | project piiCommandText_s, activityId_g, databasename_s , collectionname_s
+ | join kind=inner topRequestsByRUcharge on activityId_g
+ | project databasename_s , collectionname_s , piiCommandText_s , requestCharge_s, TimeGenerated
+ | order by requestCharge_s desc
+ | take 10
+ ```
++
+- Requests throttled (statusCode = 429) in a given time window
+
+# [Resource-specific](#tab/resource-specific)
+ ```Kusto
+ let throttledRequests = CDBDataPlaneRequests
+ | where StatusCode == "429"
+ | project OperationName , TimeGenerated, ActivityId;
+ CDBGremlinRequests
+ | project PIICommandText, ActivityId, DatabaseName , CollectionName
+ | join kind=inner throttledRequests on ActivityId
+ | project DatabaseName , CollectionName , PIICommandText , OperationName, TimeGenerated
+ ```
+# [Azure Diagnostics](#tab/azure-diagnostics)
+ ```Kusto
+ let throttledRequests = AzureDiagnostics
+ | where Category == "DataPlaneRequests"
+ | where statusCode_s == "429"
+ | project OperationName , TimeGenerated, activityId_g;
+ AzureDiagnostics
+ | where Category == "GremlinRequests"
+ | project piiCommandText_s, activityId_g, databasename_s , collectionname_s
+ | join kind=inner throttledRequests on activityId_g
+ | project databasename_s , collectionname_s , piiCommandText_s , OperationName, TimeGenerated
+ ```
++
+- Queries with large response lengths (payload size of the server response)
+
+# [Resource-specific](#tab/resource-specific)
+ ```Kusto
+ let operationsbyUserAgent = CDBDataPlaneRequests
+ | project OperationName, DurationMs, RequestCharge, ResponseLength, ActivityId;
+ CDBGremlinRequests
+ //specify collection and database
+ //| where DatabaseName == "DBNAME" and CollectionName == "COLLECTIONNAME"
+ | join kind=inner operationsbyUserAgent on ActivityId
+ | summarize max(ResponseLength) by PIICommandText
+ | order by max_ResponseLength desc
+ ```
+
+# [Azure Diagnostics](#tab/azure-diagnostics)
+ ```Kusto
+ let operationsbyUserAgent = AzureDiagnostics
+ | where Category=="DataPlaneRequests"
+ | project OperationName, duration_s, requestCharge_s, responseLength_s, activityId_g;
+ AzureDiagnostics
+ | where Category == "GremlinRequests"
+    //| where databasename_s == "DBNAME" and collectionname_s == "COLLECTIONNAME"
+ | join kind=inner operationsbyUserAgent on activityId_g
+ | summarize max(responseLength_s1) by piiCommandText_s
+ | order by max_responseLength_s1 desc
+ ```
++
+- RU Consumption by physical partition (across all replicas in the replica set)
+
+# [Resource-specific](#tab/resource-specific)
+ ```Kusto
+ CDBPartitionKeyRUConsumption
+ | where TimeGenerated >= now(-1d)
+ //specify collection and database
+ //| where DatabaseName == "DBNAME" and CollectionName == "COLLECTIONNAME"
+ // filter by operation type
+ //| where operationType_s == 'Create'
+ | summarize sum(todouble(RequestCharge)) by toint(PartitionKeyRangeId)
+ | render columnchart
+ ```
+# [Azure Diagnostics](#tab/azure-diagnostics)
+ ```Kusto
+ AzureDiagnostics
+ | where TimeGenerated >= now(-1d)
+ | where Category == 'PartitionKeyRUConsumption'
+ //specify collection and database
+    //| where databasename_s == "DBNAME" and collectionname_s == "COLLECTIONNAME"
+ // filter by operation type
+ //| where operationType_s == 'Create'
+ | summarize sum(todouble(requestCharge_s)) by toint(partitionKeyRangeId_s)
+ | render columnchart
+ ```
++
+- RU Consumption by logical partition (across all replicas in the replica set)
+
+# [Resource-specific](#tab/resource-specific)
+ ```Kusto
+ CDBPartitionKeyRUConsumption
+ | where TimeGenerated >= now(-1d)
+ //specify collection and database
+ //| where DatabaseName == "DBNAME" and CollectionName == "COLLECTIONNAME"
+ // filter by operation type
+ //| where operationType_s == 'Create'
+ | summarize sum(todouble(RequestCharge)) by PartitionKey, PartitionKeyRangeId
+ | render columnchart
+ ```
+# [Azure Diagnostics](#tab/azure-diagnostics)
+ ```Kusto
+ AzureDiagnostics
+ | where TimeGenerated >= now(-1d)
+ | where Category == 'PartitionKeyRUConsumption'
+ //specify collection and database
+    //| where databasename_s == "DBNAME" and collectionname_s == "COLLECTIONNAME"
+ // filter by operation type
+ //| where operationType_s == 'Create'
+ | summarize sum(todouble(requestCharge_s)) by partitionKey_s, partitionKeyRangeId_s
+ | render columnchart
+ ```
++
+## Next steps
+* For more information on how to create diagnostic settings for Cosmos DB, see the [Creating Diagnostics settings](cosmosdb-monitor-resource-logs.md) article.
+
+* For detailed information about how to create a diagnostic setting by using the Azure portal, CLI, or PowerShell, see the [create diagnostic setting to collect platform logs and metrics in Azure](../azure-monitor/essentials/diagnostic-settings.md) article.
cosmos-db Queries Mongo https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cosmos-db/queries-mongo.md
+
+ Title: Troubleshoot issues with advanced diagnostics queries for Mongo API
+
+description: Learn how to query diagnostics logs for troubleshooting data stored in Azure Cosmos DB for Mongo API
++++ Last updated : 06/12/2021+++
+# Troubleshoot issues with advanced diagnostics queries for Mongo API
++
+> [!div class="op_single_selector"]
+> * [SQL (Core) API](cosmos-db-advanced-queries.md)
+> * [MongoDB API](queries-mongo.md)
+> * [Cassandra API](queries-cassandra.md)
+> * [Gremlin API](queries-gremlin.md)
+>
+
+In this article, we'll cover how to write more advanced queries to help troubleshoot issues with your Azure Cosmos DB account using diagnostics logs sent to **AzureDiagnostics (legacy)** and **Resource-specific (preview)** tables.
+
+For Azure Diagnostics tables, all data is written into one single table and users will need to specify which category they'd like to query. If you'd like to view the full-text query of your request, [follow this article](cosmosdb-monitor-resource-logs.md#full-text-query) to learn how to enable this feature.
+
+For [resource-specific tables](cosmosdb-monitor-resource-logs.md#create-setting-portal), data is written into individual tables for each category of the resource. We recommend this mode since it makes it much easier to work with the data, provides better discoverability of the schemas, and improves performance across both ingestion latency and query times.
+
+## Common queries
+
+- Top N(10) RU consuming requests/queries in a given time frame
+
+# [Resource-specific](#tab/resource-specific)
+ ```Kusto
+ let topRequestsByRUcharge = CDBDataPlaneRequests
+ | where TimeGenerated > ago(24h)
+ | project RequestCharge , TimeGenerated, ActivityId;
+ CDBMongoRequests
+ | project PIICommandText, ActivityId, DatabaseName , CollectionName
+ | join kind=inner topRequestsByRUcharge on ActivityId
+ | project DatabaseName , CollectionName , PIICommandText , RequestCharge, TimeGenerated
+ | order by RequestCharge desc
+ | take 10
+ ```
+
+# [Azure Diagnostics](#tab/azure-diagnostics)
+ ```Kusto
+ let topRequestsByRUcharge = AzureDiagnostics
+ | where Category == "DataPlaneRequests" and TimeGenerated > ago(1h)
+ | project requestCharge_s , TimeGenerated, activityId_g;
+ AzureDiagnostics
+ | where Category == "MongoRequests"
+ | project piiCommandText_s, activityId_g, databasename_s , collectionname_s
+ | join kind=inner topRequestsByRUcharge on activityId_g
+ | project databasename_s , collectionname_s , piiCommandText_s , requestCharge_s, TimeGenerated
+ | order by requestCharge_s desc
+ | take 10
+ ```
++
+- Requests throttled (statusCode = 429 or 16500) in a given time window
+
+# [Resource-specific](#tab/resource-specific)
+ ```Kusto
+ let throttledRequests = CDBDataPlaneRequests
+ | where StatusCode == "429" or StatusCode == "16500"
+ | project OperationName , TimeGenerated, ActivityId;
+ CDBMongoRequests
+ | project PIICommandText, ActivityId, DatabaseName , CollectionName
+ | join kind=inner throttledRequests on ActivityId
+ | project DatabaseName , CollectionName , PIICommandText , OperationName, TimeGenerated
+ ```
+
+# [Azure Diagnostics](#tab/azure-diagnostics)
+ ```Kusto
+ let throttledRequests = AzureDiagnostics
+ | where Category == "DataPlaneRequests"
+ | where statusCode_s == "429" or statusCode_s == "16500"
+ | project OperationName , TimeGenerated, activityId_g;
+ AzureDiagnostics
+ | where Category == "MongoRequests"
+ | project piiCommandText_s, activityId_g, databasename_s , collectionname_s
+ | join kind=inner throttledRequests on activityId_g
+ | project databasename_s , collectionname_s , piiCommandText_s , OperationName, TimeGenerated
+ ```
++
+- Timed out requests (statusCode = 50) in a given time window
+
+# [Resource-specific](#tab/resource-specific)
+ ```Kusto
+ let throttledRequests = CDBDataPlaneRequests
+ | where StatusCode == "50"
+ | project OperationName , TimeGenerated, ActivityId;
+ CDBMongoRequests
+ | project PIICommandText, ActivityId, DatabaseName , CollectionName
+ | join kind=inner throttledRequests on ActivityId
+ | project DatabaseName , CollectionName , PIICommandText , OperationName, TimeGenerated
+ ```
+# [Azure Diagnostics](#tab/azure-diagnostics)
+ ```Kusto
+ let throttledRequests = AzureDiagnostics
+ | where Category == "DataPlaneRequests"
+ | where statusCode_s == "50"
+ | project OperationName , TimeGenerated, activityId_g;
+ AzureDiagnostics
+ | where Category == "MongoRequests"
+ | project piiCommandText_s, activityId_g, databasename_s , collectionname_s
+ | join kind=inner throttledRequests on activityId_g
+ | project databasename_s , collectionname_s , piiCommandText_s , OperationName, TimeGenerated
+ ```
++
+- Queries with large response lengths (payload size of the server response)
+
+# [Resource-specific](#tab/resource-specific)
+ ```Kusto
+ let operationsbyUserAgent = CDBDataPlaneRequests
+ | project OperationName, DurationMs, RequestCharge, ResponseLength, ActivityId;
+ CDBMongoRequests
+ //specify collection and database
+ //| where DatabaseName == "DBNAME" and CollectionName == "COLLECTIONNAME"
+ | join kind=inner operationsbyUserAgent on ActivityId
+ | summarize max(ResponseLength) by PIICommandText
+ | order by max_ResponseLength desc
+ ```
+# [Azure Diagnostics](#tab/azure-diagnostics)
+ ```Kusto
+ let operationsbyUserAgent = AzureDiagnostics
+ | where Category=="DataPlaneRequests"
+ | project OperationName, duration_s, requestCharge_s, responseLength_s, activityId_g;
+ AzureDiagnostics
+ | where Category == "MongoRequests"
+ //specify collection and database
+    //| where databasename_s == "DBNAME" and collectionname_s == "COLLECTIONNAME"
+ | join kind=inner operationsbyUserAgent on activityId_g
+ | summarize max(responseLength_s1) by piiCommandText_s
+ | order by max_responseLength_s1 desc
+ ```
++
+- RU Consumption by physical partition (across all replicas in the replica set)
+
+# [Resource-specific](#tab/resource-specific)
+ ```Kusto
+ CDBPartitionKeyRUConsumption
+ | where TimeGenerated >= now(-1d)
+ //specify collection and database
+ //| where DatabaseName == "DBNAME" and CollectionName == "COLLECTIONNAME"
+ // filter by operation type
+ //| where operationType_s == 'Create'
+ | summarize sum(todouble(RequestCharge)) by toint(PartitionKeyRangeId)
+ | render columnchart
+ ```
+
+# [Azure Diagnostics](#tab/azure-diagnostics)
+ ```Kusto
+ AzureDiagnostics
+ | where TimeGenerated >= now(-1d)
+ | where Category == 'PartitionKeyRUConsumption'
+ //specify collection and database
+    //| where databasename_s == "DBNAME" and collectionname_s == "COLLECTIONNAME"
+ // filter by operation type
+ //| where operationType_s == 'Create'
+ | summarize sum(todouble(requestCharge_s)) by toint(partitionKeyRangeId_s)
+ | render columnchart
+ ```
++
+- RU Consumption by logical partition (across all replicas in the replica set)
+
+# [Resource-specific](#tab/resource-specific)
+ ```Kusto
+ CDBPartitionKeyRUConsumption
+ | where TimeGenerated >= now(-1d)
+ //specify collection and database
+ //| where DatabaseName == "DBNAME" and CollectionName == "COLLECTIONNAME"
+ // filter by operation type
+ //| where operationType_s == 'Create'
+ | summarize sum(todouble(RequestCharge)) by PartitionKey, PartitionKeyRangeId
+ | render columnchart
+ ```
+# [Azure Diagnostics](#tab/azure-diagnostics)
+ ```Kusto
+ AzureDiagnostics
+ | where TimeGenerated >= now(-1d)
+ | where Category == 'PartitionKeyRUConsumption'
+ //specify collection and database
+    //| where databasename_s == "DBNAME" and collectionname_s == "COLLECTIONNAME"
+ // filter by operation type
+ //| where operationType_s == 'Create'
+ | summarize sum(todouble(requestCharge_s)) by partitionKey_s, partitionKeyRangeId_s
+ | render columnchart
+ ```
++
+## Next steps
+* For more information on how to create diagnostic settings for Cosmos DB, see the [Creating Diagnostics settings](cosmosdb-monitor-resource-logs.md) article.
+
+* For detailed information about how to create a diagnostic setting by using the Azure portal, CLI, or PowerShell, see the [create diagnostic setting to collect platform logs and metrics in Azure](../azure-monitor/essentials/diagnostic-settings.md) article.
cost-management-billing Programmatically Create Subscription Enterprise Agreement https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cost-management-billing/manage/programmatically-create-subscription-enterprise-agreement.md
Previously updated : 05/25/2021 Last updated : 06/22/2021
You get the subscriptionId as part of the response from the command.
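The exact shape of that response isn't shown in this section; as a hedged illustration only (the alias name matches the sample used elsewhere in this article and the GUID is a placeholder), the subscription alias API typically returns something like the following, with the new subscription ID under `properties.subscriptionId`:

```json
{
  "id": "/providers/Microsoft.Subscription/aliases/sampleAlias",
  "name": "sampleAlias",
  "type": "Microsoft.Subscription/aliases",
  "properties": {
    "subscriptionId": "11111111-2222-3333-4444-555555555555",
    "provisioningState": "Succeeded"
  }
}
```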
The previous section showed how to create a subscription with PowerShell, CLI, or REST API. If you need to automate creating subscriptions, consider using an Azure Resource Manager template (ARM template).
-The following template creates a subscription. For `billingScope`, provide the enrollment account ID. For `targetManagementGroup`, provide the management group where you want to create the subscription.
+The following template creates a subscription. For `billingScope`, provide the enrollment account ID. The subscription is created in the root management group. After creating the subscription, you can move it to another management group.
```json {
The following template creates a subscription. For `billingScope`, provide the e
"metadata": { "description": "Provide the full resource ID of billing scope to use for subscription creation." }
- },
- "targetManagementGroup": {
- "type": "string",
- "metadata": {
- "description": "Provide the ID of the target management group to place the subscription."
- }
} }, "resources": [
The following template creates a subscription. For `billingScope`, provide the e
"properties": { "workLoad": "Production", "displayName": "[parameters('subscriptionAliasName')]",
- "billingScope": "[parameters('billingScope')]",
- "managementGroupId": "[tenantResourceId('Microsoft.Management/managementGroups/', parameters('targetManagementGroup'))]"
+ "billingScope": "[parameters('billingScope')]"
} } ],
With a request body:
}, "billingScope": { "value": "/providers/Microsoft.Billing/BillingAccounts/1234567/enrollmentAccounts/7654321"
- },
- "targetManagementGroup": {
- "value": "mg2"
} }, "mode": "Incremental"
New-AzManagementGroupDeployment `
-ManagementGroupId mg1 ` -TemplateFile azuredeploy.json ` -subscriptionAliasName sampleAlias `
- -billingScope "/providers/Microsoft.Billing/BillingAccounts/1234567/enrollmentAccounts/7654321" `
- -targetManagementGroup mg2
+ -billingScope "/providers/Microsoft.Billing/BillingAccounts/1234567/enrollmentAccounts/7654321"
``` ### [Azure CLI](#tab/azure-cli)
az deployment mg create \
--location eastus \ --management-group-id mg1 \ --template-file azuredeploy.json \
- --parameters subscriptionAliasName='sampleAlias' billingScope='/providers/Microsoft.Billing/BillingAccounts/1234567/enrollmentAccounts/7654321' targetManagementGroup=mg2
+ --parameters subscriptionAliasName='sampleAlias' billingScope='/providers/Microsoft.Billing/BillingAccounts/1234567/enrollmentAccounts/7654321'
```
+To move a subscription to a new management group, use the following template.
+
+```json
+{
+ "$schema": "https://schema.management.azure.com/schemas/2019-08-01/managementGroupDeploymentTemplate.json#",
+ "contentVersion": "1.0.0.0",
+ "parameters": {
+ "targetMgId": {
+ "type": "string",
+ "metadata": {
+ "description": "Provide the ID of the management group that you want to move the subscription to."
+ }
+ },
+ "subscriptionId": {
+ "type": "string",
+ "metadata": {
+ "description": "Provide the ID of the existing subscription to move."
+ }
+ }
+ },
+ "resources": [
+ {
+ "scope": "/",
+ "type": "Microsoft.Management/managementGroups/subscriptions",
+ "apiVersion": "2020-05-01",
+ "name": "[concat(parameters('targetMgId'), '/', parameters('subscriptionId'))]",
+ "properties": {
+ }
+ }
+ ],
+ "outputs": {}
+}
+```
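Because the move template uses the management group deployment schema, it can be deployed at management group scope like the creation template shown earlier. As a hedged example (the file name `move-subscription.json`, the management group IDs, and the subscription GUID below are all placeholders), the deployment might look like this:

```azurecli-interactive
# Illustrative only: the template file name, management group IDs, and subscription ID are placeholders.
az deployment mg create \
  --location eastus \
  --management-group-id mg1 \
  --template-file move-subscription.json \
  --parameters targetMgId='mg2' subscriptionId='11111111-2222-3333-4444-555555555555'
```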
+ ## Limitations of Azure Enterprise subscription creation API - Only Azure Enterprise subscriptions are created using the API.
cost-management-billing Programmatically Create Subscription Microsoft Customer Agreement https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cost-management-billing/manage/programmatically-create-subscription-microsoft-customer-agreement.md
Previously updated : 03/29/2021 Last updated : 06/22/2021
To install the latest version of the module that contains the `New-AzSubscriptio
Run the following [New-AzSubscriptionAlias](/powershell/module/az.subscription/new-azsubscription) command and the billing scope `"/providers/Microsoft.Billing/billingAccounts/5e98e158-xxxx-xxxx-xxxx-xxxxxxxxxxxx:xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx_xxxx-xx-xx/billingProfiles/AW4F-xxxx-xxx-xxx/invoiceSections/SH3V-xxxx-xxx-xxx"`. ```azurepowershell
-New-AzSubscriptionAlias -AliasName "sampleAlias" -SubscriptionName "Dev Team Subscription" -BillingScope "/providers/Microsoft.Billing/billingAccounts/5e98e158-xxxx-xxxx-xxxx-xxxxxxxxxxxx:xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx_xxxx-xx-xx/billingProfiles/AW4F-xxxx-xxx-xxx/invoiceSections/SH3V-xxxx-xxx-xxx" -Workload 'Production"
+New-AzSubscriptionAlias -AliasName "sampleAlias" -SubscriptionName "Dev Team Subscription" -BillingScope "/providers/Microsoft.Billing/billingAccounts/5e98e158-xxxx-xxxx-xxxx-xxxxxxxxxxxx:xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx_xxxx-xx-xx/billingProfiles/AW4F-xxxx-xxx-xxx/invoiceSections/SH3V-xxxx-xxx-xxx" -Workload "Production"
``` You get the subscriptionId as part of the response from the command.
You get the subscriptionId as part of the response from the command.
The previous section showed how to create a subscription with PowerShell, CLI, or REST API. If you need to automate creating subscriptions, consider using an Azure Resource Manager template (ARM template).
-The following template creates a subscription. For `billingScope`, provide the invoice section ID. For `targetManagementGroup`, provide the management group where you want to create the subscription.
+The following template creates a subscription. For `billingScope`, provide the invoice section ID. The subscription is created in the root management group. After creating the subscription, you can move it to another management group.
```json {
The following template creates a subscription. For `billingScope`, provide the i
"metadata": { "description": "Provide the full resource ID of billing scope to use for subscription creation." }
- },
- "targetManagementGroup": {
- "type": "string",
- "metadata": {
- "description": "Provide the ID of the target management group to place the subscription."
- }
} }, "resources": [
The following template creates a subscription. For `billingScope`, provide the i
"properties": { "workLoad": "Production", "displayName": "[parameters('subscriptionAliasName')]",
- "billingScope": "[parameters('billingScope')]",
- "managementGroupId": "[tenantResourceId('Microsoft.Management/managementGroups/', parameters('targetManagementGroup'))]"
+ "billingScope": "[parameters('billingScope')]"
} } ],
With a request body:
}, "billingScope": { "value": "/providers/Microsoft.Billing/billingAccounts/5e98e158-xxxx-xxxx-xxxx-xxxxxxxxxxxx:xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx_xxxx-xx-xx/billingProfiles/AW4F-xxxx-xxx-xxx/invoiceSections/SH3V-xxxx-xxx-xxx"
- },
- "targetManagementGroup": {
- "value": "mg2"
} }, "mode": "Incremental"
New-AzManagementGroupDeployment `
-ManagementGroupId mg1 ` -TemplateFile azuredeploy.json ` -subscriptionAliasName sampleAlias `
- -billingScope "/providers/Microsoft.Billing/billingAccounts/5e98e158-xxxx-xxxx-xxxx-xxxxxxxxxxxx:xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx_xxxx-xx-xx/billingProfiles/AW4F-xxxx-xxx-xxx/invoiceSections/SH3V-xxxx-xxx-xxx" `
- -targetManagementGroup mg2
+ -billingScope "/providers/Microsoft.Billing/billingAccounts/5e98e158-xxxx-xxxx-xxxx-xxxxxxxxxxxx:xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx_xxxx-xx-xx/billingProfiles/AW4F-xxxx-xxx-xxx/invoiceSections/SH3V-xxxx-xxx-xxx"
``` ### [Azure CLI](#tab/azure-cli)
az deployment mg create \
--location eastus \ --management-group-id mg1 \ --template-file azuredeploy.json \
- --parameters subscriptionAliasName='sampleAlias' billingScope='/providers/Microsoft.Billing/billingAccounts/5e98e158-xxxx-xxxx-xxxx-xxxxxxxxxxxx:xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx_xxxx-xx-xx/billingProfiles/AW4F-xxxx-xxx-xxx/invoiceSections/SH3V-xxxx-xxx-xxx' targetManagementGroup=mg2
+ --parameters subscriptionAliasName='sampleAlias' billingScope='/providers/Microsoft.Billing/billingAccounts/5e98e158-xxxx-xxxx-xxxx-xxxxxxxxxxxx:xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx_xxxx-xx-xx/billingProfiles/AW4F-xxxx-xxx-xxx/invoiceSections/SH3V-xxxx-xxx-xxx'
```
+To move a subscription to a new management group, use the following template.
+
+```json
+{
+ "$schema": "https://schema.management.azure.com/schemas/2019-08-01/managementGroupDeploymentTemplate.json#",
+ "contentVersion": "1.0.0.0",
+ "parameters": {
+ "targetMgId": {
+ "type": "string",
+ "metadata": {
+ "description": "Provide the ID of the management group that you want to move the subscription to."
+ }
+ },
+ "subscriptionId": {
+ "type": "string",
+ "metadata": {
+ "description": "Provide the ID of the existing subscription to move."
+ }
+ }
+ },
+ "resources": [
+ {
+ "scope": "/",
+ "type": "Microsoft.Management/managementGroups/subscriptions",
+ "apiVersion": "2020-05-01",
+ "name": "[concat(parameters('targetMgId'), '/', parameters('subscriptionId'))]",
+ "properties": {
+ }
+ }
+ ],
+ "outputs": {}
+}
+```
+ ## Next steps * Now that you've created a subscription, you can grant that ability to other users and service principals. For more information, see [Grant access to create Azure Enterprise subscriptions (preview)](grant-access-to-create-subscription.md).
cost-management-billing Programmatically Create Subscription Microsoft Partner Agreement https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cost-management-billing/manage/programmatically-create-subscription-microsoft-partner-agreement.md
Previously updated : 03/12/2021 Last updated : 06/22/2021
Pass the optional *resellerId* copied from the second step in the `az account al
The previous section showed how to create a subscription with PowerShell, CLI, or REST API. If you need to automate creating subscriptions, consider using an Azure Resource Manager template (ARM template).
-The following template creates a subscription. For `billingScope`, provide the customer ID. For `targetManagementGroup`, provide the management group where you want to create the subscription.
+The following template creates a subscription. For `billingScope`, provide the customer ID. The subscription is created in the root management group. After creating the subscription, you can move it to another management group.
```json {
The following template creates a subscription. For `billingScope`, provide the c
"metadata": { "description": "Provide the full resource ID of billing scope to use for subscription creation." }
- },
- "targetManagementGroup": {
- "type": "string",
- "metadata": {
- "description": "Provide the ID of the target management group to place the subscription."
- }
} }, "resources": [
The following template creates a subscription. For `billingScope`, provide the c
"properties": { "workLoad": "Production", "displayName": "[parameters('subscriptionAliasName')]",
- "billingScope": "[parameters('billingScope')]",
- "managementGroupId": "[tenantResourceId('Microsoft.Management/managementGroups/', parameters('targetManagementGroup'))]"
+ "billingScope": "[parameters('billingScope')]"
} } ],
With a request body:
}, "billingScope": { "value": "/providers/Microsoft.Billing/billingAccounts/99a13315-xxxx-xxxx-xxxx-xxxxxxxxxxxx:xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx_xxxx-xx-xx/customers/2281f543-xxxx-xxxx-xxxx-xxxxxxxxxxxx"
- },
- "targetManagementGroup": {
- "value": "mg2"
} }, "mode": "Incremental"
New-AzManagementGroupDeployment `
-ManagementGroupId mg1 ` -TemplateFile azuredeploy.json ` -subscriptionAliasName sampleAlias `
- -billingScope "/providers/Microsoft.Billing/billingAccounts/99a13315-xxxx-xxxx-xxxx-xxxxxxxxxxxx:xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx_xxxx-xx-xx/customers/2281f543-xxxx-xxxx-xxxx-xxxxxxxxxxxx" `
- -targetManagementGroup mg2
+ -billingScope "/providers/Microsoft.Billing/billingAccounts/99a13315-xxxx-xxxx-xxxx-xxxxxxxxxxxx:xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx_xxxx-xx-xx/customers/2281f543-xxxx-xxxx-xxxx-xxxxxxxxxxxx"
``` ### [Azure CLI](#tab/azure-cli)
az deployment mg create \
--location eastus \ --management-group-id mg1 \ --template-file azuredeploy.json \
- --parameters subscriptionAliasName='sampleAlias' billingScope='/providers/Microsoft.Billing/billingAccounts/99a13315-xxxx-xxxx-xxxx-xxxxxxxxxxxx:xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx_xxxx-xx-xx/customers/2281f543-xxxx-xxxx-xxxx-xxxxxxxxxxxx' targetManagementGroup=mg2
+ --parameters subscriptionAliasName='sampleAlias' billingScope='/providers/Microsoft.Billing/billingAccounts/99a13315-xxxx-xxxx-xxxx-xxxxxxxxxxxx:xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx_xxxx-xx-xx/customers/2281f543-xxxx-xxxx-xxxx-xxxxxxxxxxxx'
```
+To move a subscription to a new management group, use the following template.
+
+```json
+{
+ "$schema": "https://schema.management.azure.com/schemas/2019-08-01/managementGroupDeploymentTemplate.json#",
+ "contentVersion": "1.0.0.0",
+ "parameters": {
+ "targetMgId": {
+ "type": "string",
+ "metadata": {
+ "description": "Provide the ID of the management group that you want to move the subscription to."
+ }
+ },
+ "subscriptionId": {
+ "type": "string",
+ "metadata": {
+ "description": "Provide the ID of the existing subscription to move."
+ }
+ }
+ },
+ "resources": [
+ {
+ "scope": "/",
+ "type": "Microsoft.Management/managementGroups/subscriptions",
+ "apiVersion": "2020-05-01",
+ "name": "[concat(parameters('targetMgId'), '/', parameters('subscriptionId'))]",
+ "properties": {
+ }
+ }
+ ],
+ "outputs": {}
+}
+```
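As with the creation template, one way to deploy this move template is with a management group deployment that mirrors the earlier Azure CLI example. The template file name, management group IDs, and subscription ID below are placeholders, not values from the article.

```azurecli
az deployment mg create \
  --location eastus \
  --management-group-id mg1 \
  --template-file movesubscription.json \
  --parameters targetMgId='mg2' subscriptionId='00000000-0000-0000-0000-000000000000'
```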
+ ## Next steps * Now that you've created a subscription, you can grant that ability to other users and service principals. For more information, see [Grant access to create Azure Enterprise subscriptions (preview)](grant-access-to-create-subscription.md).
data-factory Compute Optimized Retire https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/compute-optimized-retire.md
Azure Data Factory and Azure Synapse Analytics data flows provide a low-code mec
| Memory Optimized Data Flows | Best performing runtime for data flows when working with large datasets and many calculations | | Compute Optimized Data Flows | Not recommended for production workloads |
-## Migration steps
+## Migration steps
-Your Compute Optimized data flows will continue to work in pipelines as-is. However, new Azure Integration Runtimes and data flow activities will not be able to use Compute Optimized. When creating a new data flow activity:
+From now through 31 August 2024, your Compute Optimized data flows will continue to work in your existing pipelines. To avoid service disruption, please remove your existing Compute Optimized data flows before 31 August 2024 and follow the steps below to create a new Azure Integration Runtime and data flow activity. When creating a new data flow activity:
1. Create a new Azure Integration Runtime with “General Purpose” or “Memory Optimized” as the compute type. 2. Set your data flow activity using either of those compute types.
data-factory Concepts Data Flow Monitoring https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/concepts-data-flow-monitoring.md
Previously updated : 06/11/2021 Last updated : 06/18/2021 # Monitor Data Flows [!INCLUDE[appliesto-adf-asa-md](includes/appliesto-adf-asa-md.md)]
-After you have completed building and debugging your data flow, you want to schedule your data flow to execute on a schedule within the context of a pipeline. You can schedule the pipeline from Azure Data Factory using Triggers. Or you can use the Trigger Now option from the Azure Data Factory Pipeline Builder to execute a single-run execution to test your data flow within the pipeline context.
+After you have completed building and debugging your data flow, you can schedule it to execute within the context of a pipeline. You can schedule the pipeline from Azure Data Factory using Triggers. To test and debug your data flow from a pipeline, you can use the Debug button on the toolbar ribbon or the Trigger Now option from the Azure Data Factory Pipeline Builder to run a single execution of your data flow within the pipeline context.
+
+> [!VIDEO https://www.microsoft.com/videoplayer/embed/RE4P5pV]
When you execute your pipeline, you can monitor the pipeline and all of the activities contained in the pipeline including the Data Flow activity. Click on the monitor icon in the left-hand Azure Data Factory UI panel. You can see a screen similar to the one below. The highlighted icons allow you to drill into the activities in the pipeline, including the Data Flow activity.
You see statistics at this level as well including the run times and status. The
![Screenshot shows the eyeglasses icon to see details of data flow execution.](media/data-flow/monitoring-details.png "Data Flow Monitoring")
-When you're in the graphical node monitoring view, you can see a simplified view-only version of your data flow graph.
+When you're in the graphical node monitoring view, you can see a simplified view-only version of your data flow graph. To see the details view with larger graph nodes that include transformation stage labels, use the zoom slider on the right side of your canvas. You can also use the search button on the right side to find parts of your data flow logic in the graph.
![Screenshot shows the view-only version of the graph.](media/data-flow/mon003.png "Data Flow Monitoring")
When your Data Flow is executed in Spark, Azure Data Factory determines optimal
* When you select individual transformations, you receive additional feedback on the right-hand panel that shows partition stats, column counts, skewness (how evenly is the data distributed across partitions), and kurtosis (how spiky is the data).
+* Sorting by *processing time* will help you to identify which stages in your data flow took the most time.
+
+* To find which transformations inside each stage took the most time, sort on *highest processing time*.
+
+* The *rows written* is also sortable as a way to identify which streams inside your data flow are writing the most data.
+ * When you select the Sink in the node view, you can see column lineage. There are three different methods that columns are accumulated throughout your data flow to land in the Sink. They are: * Computed: You use the column for conditional processing or within an expression in your data flow, but don't land it in the Sink
data-factory Control Flow For Each Activity https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/control-flow-for-each-activity.md
Property | Description | Allowed values | Required
-- | -- | -- | -- name | Name of the for-each activity. | String | Yes type | Must be set to **ForEach** | String | Yes
-isSequential | Specifies whether the loop should be executed sequentially or in parallel. Maximum of 20 loop iterations can be executed at once in parallel). For example, if you have a ForEach activity iterating over a copy activity with 10 different source and sink datasets with **isSequential** set to False, all copies are executed at once. Default is False. <br/><br/> If "isSequential" is set to False, ensure that there is a correct configuration to run multiple executables. Otherwise, this property should be used with caution to avoid incurring write conflicts. For more information, see [Parallel execution](#parallel-execution) section. | Boolean | No. Default is False.
+isSequential | Specifies whether the loop should be executed sequentially or in parallel. A maximum of 50 loop iterations can be executed at once in parallel. For example, if you have a ForEach activity iterating over a copy activity with 10 different source and sink datasets with **isSequential** set to False, all copies are executed at once. Default is False. <br/><br/> If "isSequential" is set to False, ensure that there is a correct configuration to run multiple executables. Otherwise, this property should be used with caution to avoid incurring write conflicts. For more information, see the [Parallel execution](#parallel-execution) section. | Boolean | No. Default is False.
batchCount | Batch count to be used for controlling the number of parallel execution (when isSequential is set to false). This is the upper concurrency limit, but the for-each activity will not always execute at this number | Integer (maximum 50) | No. Default is 20. Items | An expression that returns a JSON Array to be iterated over. | Expression (which returns a JSON Array) | Yes Activities | The activities to be executed. | List of Activities | Yes ## Parallel execution
-If **isSequential** is set to false, the activity iterates in parallel with a maximum of 20 concurrent iterations. This setting should be used with caution. If the concurrent iterations are writing to the same folder but to different files, this approach is fine. If the concurrent iterations are writing concurrently to the exact same file, this approach most likely causes an error.
+If **isSequential** is set to false, the activity iterates in parallel with a maximum of 50 concurrent iterations. This setting should be used with caution. If the concurrent iterations are writing to the same folder but to different files, this approach is fine. If the concurrent iterations are writing concurrently to the exact same file, this approach most likely causes an error.
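As an illustration of these settings (a sketch, not an example taken from the article), a ForEach activity that fans out up to 50 parallel iterations over a pipeline parameter might be defined as follows; the activity names, the `fileNames` parameter, and the inner Wait activity are hypothetical placeholders.

```json
{
    "name": "ForEachFile",
    "type": "ForEach",
    "typeProperties": {
        "isSequential": false,
        "batchCount": 50,
        "items": {
            "value": "@pipeline().parameters.fileNames",
            "type": "Expression"
        },
        "activities": [
            {
                "name": "WaitPerItem",
                "type": "Wait",
                "typeProperties": {
                    "waitTimeInSeconds": 1
                }
            }
        ]
    }
}
```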
## Iteration expression language In the ForEach activity, provide an array to be iterated over for the property **items**. Use `@item()` to iterate over a single enumeration in the ForEach activity. For example, if **items** is an array: [1, 2, 3], `@item()` returns 1 in the first iteration, 2 in the second iteration, and 3 in the third iteration. You can also use an expression like `@range(0,10)` to iterate ten times, starting with 0 and ending with 9.
data-factory Control Flow Switch Activity https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/control-flow-switch-activity.md
Previously updated : 10/08/2019 Last updated : 06/23/2021
The Switch activity provides the same functionality that a switch statement provides in programming languages. It evaluates a set of activities corresponding to a case that matches the condition evaluation.
+> [!NOTE]
+> This section provides JSON definitions of the Switch activity. Expressions for Switch, Cases, and so on that evaluate to a string should not contain the '.' character, which is a reserved character.
+>
## Syntax ```json
data-factory Data Factory Tutorials https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/data-factory-tutorials.md
Below is a list of tutorials to help explain and walk through a series of Data F
[Best practices for lake data in ADLS Gen2](tutorial-data-flow-write-to-lake.md)
+[Dynamically set column names](data-flow-tutorials.md)
+ ## External data services [Azure Databricks notebook activity](transform-data-using-databricks-notebook.md)
data-factory Data Flow Tutorials https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/data-flow-tutorials.md
As updates are constantly made to the product, some features have added or diffe
[Debugging workflows for data flows](https://youtu.be/y3suL7UsWVw)
+[Updated monitoring view](https://www.youtube.com/watch?v=FWCBslsk6KE)
+ ## Transformation overviews [Aggregate transformation](http://youtu.be/jdL75xIr98I)
data-factory How To Create Schedule Trigger https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/how-to-create-schedule-trigger.md
You can create a **schedule trigger** to schedule a pipeline to run periodically
> For time zones that observe daylight saving, trigger time will auto-adjust for the twice a year change. To opt out of the daylight saving change, please select a time zone that does not observe daylight saving, for instance UTC 1. Specify **Recurrence** for the trigger. Select one of the values from the drop-down list (Every minute, Hourly, Daily, Weekly, and Monthly). Enter the multiplier in the text box. For example, if you want the trigger to run once for every 15 minutes, you select **Every Minute**, and enter **15** in the text box.
+ 1. In the recurrence section, if you choose "Day(s), Week(s) or Month(s)" from the drop-down, you can find "Advanced recurrence options", as shown in the JSON sketch after these steps.
+ :::image type="content" source="./media/how-to-create-schedule-trigger/advanced.png" alt-text="Advanced recurrence options of Day(s), Week(s) or Month(s)":::
1. To specify an end date time, select **Specify an End Date**, and specify _Ends On_, then select **OK**. There is a cost associated with each pipeline run. If you are testing, you may want to ensure that the pipeline is triggered only a couple of times. However, ensure that there is enough time for the pipeline to run between the publish time and the end time. The trigger comes into effect only after you publish the solution to Data Factory, not when you save the trigger in the UI. ![Trigger settings](./media/how-to-create-schedule-trigger/trigger-settings-01.png)
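A hedged JSON sketch of what those advanced weekly recurrence options look like in a trigger definition is shown below; the trigger name, pipeline name, start time, and schedule values are placeholders rather than values from this walkthrough.

```json
{
    "name": "WeekdayMorningTrigger",
    "properties": {
        "type": "ScheduleTrigger",
        "typeProperties": {
            "recurrence": {
                "frequency": "Week",
                "interval": 1,
                "startTime": "2021-07-01T00:00:00Z",
                "timeZone": "UTC",
                "schedule": {
                    "weekDays": [ "Monday", "Wednesday", "Friday" ],
                    "hours": [ 9 ],
                    "minutes": [ 30 ]
                }
            }
        },
        "pipelines": [
            {
                "pipelineReference": {
                    "type": "PipelineReference",
                    "referenceName": "MyPipeline"
                },
                "parameters": {}
            }
        ]
    }
}
```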
data-factory Pipeline Trigger Troubleshoot Guide https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/pipeline-trigger-troubleshoot-guide.md
Title: Troubleshoot pipeline orchestration and triggers in Azure Data Factory
description: Use different methods to troubleshoot pipeline trigger issues in Azure Data Factory. Previously updated : 06/17/2021 Last updated : 04/01/2021
This can happen if you have not scaled up SHIR as per your workload.
Long queue related error messages can appear for various reasons. - **Resolution** * If you receive an error message from any source or destination via connectors, which can generate a long queue, go to [Connector Troubleshooting Guide.](./connector-troubleshoot-guide.md) * If you receive an error message about Mapping Data Flow, which can generate a long queue, go to [Data Flows Troubleshooting Guide.](./data-flow-troubleshoot-guide.md)
It is an user error because JSON payload that hits management.azure.com is corru
Perform network tracing of your API call from the ADF portal using the Edge/Chrome browser **Developer tools**. You will see the offending JSON payload, which could be corrupted by special characters (for example, $), spaces, and other types of user input. Once you fix the string expression, you can proceed with the rest of the ADF calls in the browser.
-### Unable to publish event trigger with Access Denied failure
-
-**Cause**
-
-When the Azure account lacks the required access via a role membership, it unable to access to the storage account used for the trigger.
-
-**Resolution**
-
-The Azure account needs to be assigned to a role with sufficient permissions in the storage account's access control (IAM) for the event trigger publish to succeed. The role can be the Owner role, Contributor role, or any custom role with the **Microsoft.EventGrid/EventSubscriptions/Write** permission to the storage account.
-
-[Role-based access control for an event trigger](./how-to-create-event-trigger.md#role-based-access-control)
-[Storage Event Trigger - Permission and RBAC setting](https://techcommunity.microsoft.com/t5/azure-data-factory/storage-event-trigger-permission-and-rbac-setting/ba-p/2101782)
-
-### ForEach activities do not run in parallel mode
-
-**Cause**
-
-You are running ADF in debug mode.
-
-**Resolution**
-
-Please run pipeline in trigger mode.
-
-### Can not publish because account is locked
+### Expression builder fails to load
**Cause**
-You made changes in collaboration branch to remove storage event trigger. You are trying to publish and encounter "Trigger deactivation error" message. This is due to the storage account, used for the event trigger, is being locked.
+The expression builder can fail to load due to network or cache problems with the web browser.
**Resolution**
-Remove the lock to allow publish to succeed.
+Upgrade the web browser to the latest version of a supported browser, clear cookies for the site, and refresh the page.
## Next steps
data-factory Tutorial Data Flow Dynamic Columns https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/tutorial-data-flow-dynamic-columns.md
+
+ Title: Dynamically set column names in data flows
+description: This tutorial provides steps for dynamically setting column names in data flows
+++++ Last updated : 06/17/2021++
+# Dynamically set column names in data flows
++
+Many times, when processing data for ETL jobs, you will need to change the column names before writing the results. Sometimes this is needed to align column names to a well-known target schema. Other times, you may need to set column names at runtime based on evolving schemas. In this tutorial, you'll learn how to use data flows to set column names for your destination files and database tables dynamically using external configuration files and parameters.
+
+If you're new to Azure Data Factory, see [Introduction to Azure Data Factory](introduction.md).
+
+## Prerequisites
+* **Azure subscription**. If you don't have an Azure subscription, create a [free Azure account](https://azure.microsoft.com/free/) before you begin.
+* **Azure storage account**. You use ADLS storage as a *source* and *sink* data stores. If you don't have a storage account, see [Create an Azure storage account](../storage/common/storage-account-create.md) for steps to create one.
+
+## Create a data factory
+
+In this step, you create a data factory and open the Data Factory UX to create a pipeline in the data factory.
+
+1. Open **Microsoft Edge** or **Google Chrome**. Currently, Data Factory UI is supported only in the Microsoft Edge and Google Chrome web browsers.
+1. On the left menu, select **Create a resource** > **Integration** > **Data Factory**
+1. On the **New data factory** page, under **Name**, enter **ADFTutorialDataFactory**
+1. Select the Azure **subscription** in which you want to create the data factory.
+1. For **Resource Group**, take one of the following steps:
+ * Select **Use existing**, and select an existing resource group from the drop-down list.
+ * Select **Create new**, and enter the name of a resource group. To learn about resource groups, see [Use resource groups to manage your Azure resources](../azure-resource-manager/management/overview.md).
+1. Under **Version**, select **V2**.
+1. Under **Location**, select a location for the data factory. Only locations that are supported are displayed in the drop-down list. Data stores (for example, Azure Storage and SQL Database) and computes (for example, Azure HDInsight) used by the data factory can be in other regions.
+1. Select **Create**.
+1. After the creation is finished, you see the notice in Notifications center. Select **Go to resource** to navigate to the Data factory page.
+1. Select **Author & Monitor** to launch the Data Factory UI in a separate tab.
+
+## Create a pipeline with a data flow activity
+
+In this step, you'll create a pipeline that contains a data flow activity.
+
+1. From the ADF home page, select **Create pipeline**.
+1. In the **General** tab for the pipeline, enter **DeltaLake** for **Name** of the pipeline.
+1. In the factory top bar, slide the **Data Flow debug** slider on. Debug mode allows for interactive testing of transformation logic against a live Spark cluster. Data Flow clusters take 5-7 minutes to warm up, so we recommend turning on debug first if you plan to do Data Flow development. For more information, see [Debug Mode](concepts-data-flow-debug-mode.md).
+
+ ![Data Flow Activity](media/tutorial-data-flow/dataflow1.png)
+1. In the **Activities** pane, expand the **Move and Transform** accordion. Drag and drop the **Data Flow** activity from the pane to the pipeline canvas.
+
+ ![Screenshot that shows the pipeline canvas where you can drop the Data Flow activity.](media/tutorial-data-flow/activity1.png)
+1. In the **Adding Data Flow** pop-up, select **Create new Data Flow** and then name your data flow **DynaCols**. Click Finish when done.
+
+## Build dynamic column mapping in data flows
+
+For this tutorial, we're going to use a sample movie ratings file and rename a few of the fields in the source to a new set of target columns that can change over time. The datasets you'll create below should point to this movies CSV file in your Blob Storage or ADLS Gen2 storage account. [Download the movies file here](https://github.com/kromerm/adfdataflowdocs/blob/master/sampledata/moviesDB.csv) and store the file in your Azure storage account.
+
+![Final flow](media/data-flow/dynacols-1.png "Final flow")
+
+### Tutorial objectives
+
+You'll learn how to dynamically set column names using a data flow:
+
+1. Create a source dataset for the movies CSV file.
+1. Create a lookup dataset for a field mapping JSON configuration file.
+1. Convert the columns from the source to your target column names.
+
+### Start from a blank data flow canvas
+
+First, let's set up the data flow environment for each of the mechanisms described below for landing data in ADLS Gen2.
+
+1. Click on the source transformation and call it ```movies1```.
+1. Click the new button next to dataset in the bottom panel.
+1. Choose either Blob or ADLS Gen2 depending on where you stored the moviesDB.csv file from above.
+1. Add a 2nd source, which we will use to source the configuration JSON file to look up field mappings.
+1. Name this source ```columnmappings```.
+1. For the dataset, point to a new JSON file that will store a configuration for column mapping. You can paste the following into the JSON file for this tutorial example:
+ ```
+ [
+ {"prevcolumn":"title","newcolumn":"movietitle"},
+ {"prevcolumn":"year","newcolumn":"releaseyear"}
+ ]
+ ```
+
+1. In the source settings, set the document form to ```array of documents```.
+1. Add a 3rd source and call it ```movies2```. Configure this exactly the same as ```movies1```.
+
+### Parameterized column mapping
+
+In this first scenario, you will set the output column names in your data flow by defining a column mapping that matches incoming fields against a parameter that is a string array of column names, pairing each array index with the incoming column's ordinal position. When executing this data flow from a pipeline, you can set different column names on each pipeline execution by passing this string array parameter to the data flow activity.
+
+![Parameters](media/data-flow/dynacols-3.png "Parameters")
+
+1. Go back to the data flow designer and edit the data flow created above.
+1. Click on the parameters tab
+1. Create a new parameter and choose string array data type
+1. For the default value, enter ```['a','b','c']```
+1. Use the top ```movies1``` source to modify the column names to map to these array values
+1. Add a Select transformation. The Select transformation will be used to map incoming columns to new column names for output.
+1. We're going to change the first 3 column names to the new names defined in the parameter
+1. To do this, add 3 rule-based mapping entries in the bottom pane
+1. For the first column, the matching rule will be ```position==1``` and the name will be ```$parameter1[1]```
+1. Follow the same pattern for column 2 and 3
+
+ ![Select transformation](media/data-flow/dynacols-4.png "Select transformation")
+
+1. Click on the Inspect and Data Preview tabs of the Select transformation to verify that the new column name values ```(a,b,c)``` replace the original movie, title, and genres column names.
+
+### Create a cached lookup of external column mappings
+
+Next, we'll create a cached sink for a later lookup. The cache will read an external JSON configuration file that can be used to rename columns dynamically on each pipeline execution of your data flow.
+
+1. Go back to the data flow designer and edit the data flow created above. Add a Sink transformation to the ```columnmappings``` source.
+1. Set sink type to ```Cache```.
+1. Under Settings, choose ```prevcolumn``` as the key column.
+
+### Lookup columns names from cached sink
+
+Now that you've stored the configuration file contents in memory, you can dynamically map incoming column names to new outgoing column names.
+
+1. Go back to the data flow designer and edit the data flow created above. Click on the ```movies2``` source transformation.
+1. Add a Select transformation. This time, we'll use the Select transformation to rename column names based on the target name in the JSON configuration file that is being stored in the cached sink.
+1. Add a rule-based mapping. For the Matching Condition, use this formula: ```!isNull(cachedSink#lookup(name).prevcolumn)```.
+1. For the output column name, use this formula: ```cachedSink#lookup($$).newcolumn```.
+1. What we've done is find all column names that match the ```prevcolumn``` property from the external JSON configuration file and rename each match to the new ```newcolumn``` name.
+1. Click on the Data Preview and Inspect tabs in the Select transformation and you should now see the new column names from the external mapping file.
+
+![Source 2](media/data-flow/dynacols-2.png "Source 2")
+
+## Next steps
+
+* The completed pipeline from this tutorial can be downloaded from [here](https://github.com/kromerm/adfdataflowdocs/blob/master/sampledata/DynaColsPipe.zip)
+* Learn more about [data flow sinks](data-flow-sink.md).
data-factory Data Factory How To Use Resource Manager Templates https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/v1/data-factory-how-to-use-resource-manager-templates.md
Title: Use Resource Manager templates in Data Factory
+ Title: Use Resource Manager templates in Data Factory
description: Learn how to create and use Azure Resource Manager templates to create Data Factory entities.
Last updated 01/10/2018
# Use templates to create Azure Data Factory entities > [!NOTE]
-> This article applies to version 1 of Data Factory.
+> This article applies to version 1 of Data Factory.
## Overview While using Azure Data Factory for your data integration needs, you may find yourself reusing the same pattern across different environments or implementing the same task repetitively within the same solution. Templates help you implement and manage these scenarios in an easy manner. Templates in Azure Data Factory are ideal for scenarios that involve reusability and repetition.
Check out the following Azure quickstart templates on GitHub:
* [Create a Data factory to copy data from Azure Blob Storage to Azure SQL Database](https://github.com/Azure/azure-quickstart-templates/tree/master/quickstarts/microsoft.datafactory/data-factory-blob-to-sql-copy) * [Create a Data factory with Hive activity on Azure HDInsight cluster](https://github.com/Azure/azure-quickstart-templates/tree/master/quickstarts/microsoft.datafactory/data-factory-hive-transformation) * [Create a Data factory to copy data from Salesforce to Azure Blobs](https://github.com/Azure/azure-quickstart-templates/tree/master/quickstarts/microsoft.datafactory/data-factory-salesforce-to-blob-copy)
-* [Create a Data factory that chains activities: copies data from an FTP server to Azure Blobs, invokes a hive script on an on-demand HDInsight cluster to transform the data, and copies result into Azure SQL Database](https://github.com/Azure/azure-quickstart-templates/tree/master/quickstarts/microsoft.databricks/data-factory-ftp-hive-blob)
+* [Create a Data factory that chains activities: copies data from an FTP server to Azure Blobs, invokes a hive script on an on-demand HDInsight cluster to transform the data, and copies result into Azure SQL Database](https://github.com/Azure/azure-quickstart-templates/tree/master/quickstarts/microsoft.datafactory/data-factory-ftp-hive-blob)
Feel free to share your Azure Data Factory templates at [Azure quickstart](https://azure.microsoft.com/resources/templates/). Refer to the [contribution guide](https://github.com/Azure/azure-quickstart-templates/tree/master/1-CONTRIBUTION-GUIDE) while developing templates that can be shared via this repository.
If you need to pull secrets from [Azure Key Vault](../../key-vault/general/overv
> [!NOTE] > While exporting templates for existing data factories is currently not yet supported, it is in the works. >
->
+>
databox-online Azure Stack Edge Pro R Overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/databox-online/azure-stack-edge-pro-r-overview.md
Title: Microsoft Azure Stack Edge Pro R overview | Microsoft Docs
-description: Describes Azure Stack Edge Pro R devices, a storage solution for military applications that uses a physical device for network-based transfer into Azure.
+description: Describes Azure Stack Edge Pro R devices, a storage solution that uses a physical device for network-based transfer into Azure and that can be deployed in harsh environments.
Azure Stack Edge service is a non-regional service. For more information, see [R
## Next steps - Review the [Azure Stack Edge Pro R system requirements](azure-stack-edge-gpu-system-requirements.md).
-<! Understand the [Azure Stack Edge Pro R limits](azure-stack-edge-limits.md).-->
+<! Understand the [Azure Stack Edge Pro R limits](azure-stack-edge-limits.md).-->
devtest-labs Devtest Lab Add Artifact Repo https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/devtest-labs/devtest-lab-add-artifact-repo.md
We offer a [GitHub repository of artifacts](https://github.com/Azure/azure-devte
When you create a VM, you can save the Resource Manager template, customize it if you want, and then use it later to create more VMs. You must create your own private repository to store your custom Resource Manager templates. * To learn how to create a GitHub repository, see [GitHub Bootcamp](https://help.github.com/categories/bootcamp/).
-* To learn how to create an Azure DevOps Services project that has a Git repository, see [Connect to Azure DevOps Services](https://www.visualstudio.com/get-started/setup/connect-to-visual-studio-online).
+* To learn how to create an Azure DevOps Services project that has a Git repository, see [Connect to Azure DevOps Services](https://azure.microsoft.com/services/devops/).
The following figure is an example of how a repository that has artifacts might look in GitHub:
digital-twins Concepts Models https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/digital-twins/concepts-models.md
This section goes into more detail about **components** in DTDL models.
### Basic component example
-Here is a basic example of a component on a DTDL model. This example shows a Room model that makes use of a thermostat component.
+Here is a basic example of a component on a DTDL model. This example shows a Room model that makes use of a thermostat model as a component.
:::code language="json" source="~/digital-twins-docs-samples-getting-started/models/advanced-home-example/IRoom.json" highlight="15-19, 28-41":::
-> [!NOTE]
-> Note that the component interface (thermostat component) is defined in the same array as the interface that uses it (Room). Components must be defined this way in API calls in order for the interface to be found.
+If other models in this solution should also contain a thermostat, they can reference the same thermostat model as a component in their own definitions, just like Room does.
+
+> [!IMPORTANT]
+> The component interface (thermostat in the example above) must be defined in the same array as any interfaces that use it (Room in the example above) in order for the component reference to be found.
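A stripped-down sketch of that layout, with hypothetical `dtmi` identifiers, is shown below; it only illustrates how the thermostat interface and the Room interface that references it as a component sit together in the same array.

```json
[
  {
    "@id": "dtmi:example:Thermostat;1",
    "@type": "Interface",
    "@context": "dtmi:dtdl:context;2",
    "displayName": "Thermostat",
    "contents": [
      {
        "@type": "Property",
        "name": "setPointTemp",
        "schema": "double"
      }
    ]
  },
  {
    "@id": "dtmi:example:Room;1",
    "@type": "Interface",
    "@context": "dtmi:dtdl:context;2",
    "displayName": "Room",
    "contents": [
      {
        "@type": "Component",
        "name": "thermostat",
        "schema": "dtmi:example:Thermostat;1"
      }
    ]
  }
]
```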
## Model inheritance
event-grid Event Schema Communication Services https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/event-grid/event-schema-communication-services.md
This section contains an example of what that data would look like for each even
"data": { "messageBody": "Welcome to Azure Communication Services", "messageId": "1613694358927",
+ "metadata": {
+ "key": "value",
+ "description": "A map of data associated with the message"
+ },
"senderId": "8:acs:109f0644-b956-4cd9-87b1-71024f6e2f44_00000008-578d-7caf-07fd-084822001724", "senderCommunicationIdentifier": { "rawId": "8:acs:109f0644-b956-4cd9-87b1-71024f6e2f44_00000008-578d-7caf-07fd-084822001724",
This section contains an example of what that data would look like for each even
"editTime": "2021-02-19T00:28:20.784Z", "messageBody": "Let's Chat about new communication services.", "messageId": "1613694357917",
+ "metadata": {
+ "key": "value",
+ "description": "A map of data associated with the message"
+ },
"senderId": "8:acs:109f0644-b956-4cd9-87b1-71024f6e2f44_00000008-578d-7caf-07fd-084822001724", "senderCommunicationIdentifier": { "rawId": "8:acs:109f0644-b956-4cd9-87b1-71024f6e2f44_00000008-578d-7caf-07fd-084822001724",
This section contains an example of what that data would look like for each even
"data": { "messageBody": "Talk about new Thread Events in commuication services", "messageId": "1613783230064",
+ "metadata": {
+ "key": "value",
+ "description": "A map of data associated with the message"
+ },
"type": "Text", "version": "1613783230064", "senderDisplayName": "Bob",
This section contains an example of what that data would look like for each even
"editTime": "2021-02-20T00:59:10.464+00:00", "messageBody": "8effb181-1eb2-4a58-9d03-ed48a461b19b", "messageId": "1613782685964",
+ "metadata": {
+ "key": "value",
+ "description": "A map of data associated with the message"
+ },
"type": "Text", "version": "1613782750464", "senderDisplayName": "Scott",
This section contains an example of what that data would look like for each even
* For an introduction to Azure Event Grid, see [What is Event Grid?](./overview.md) * For an introduction to Azure Event Grid Concepts, see [Concepts in Event Grid?](./concepts.md)
-* For an introduction to Azure Event Grid SystemTopics, see [System topics in Azure Event Grid?](./system-topics.md)
+* For an introduction to Azure Event Grid SystemTopics, see [System topics in Azure Event Grid?](./system-topics.md)
germany Germany Migration Main https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/germany/germany-migration-main.md
The two regions in Germany are entirely separate from global Azure. The clouds i
The guidance on identity / tenants is intended for Azure-only customers. If you use common Azure Active Directory (Azure AD) tenants for Azure and Microsoft 365 (or other Microsoft products), there are complexities in identity migration and you should first read the [Migration phases actions and impacts for the Migration from Microsoft Cloud Deutschland](/microsoft-365/enterprise/ms-cloud-germany-transition-phases?view=o365-worldwide). If you have questions, contact your account manager or Microsoft support.
+Azure Cloud Solution Providers need to take additional steps to support customers during and after the transition to the new German datacenter region. Learn more about the [additional steps](/microsoft-365/enterprise/ms-cloud-germany-transition-add-csp).
+ ## Migration process The process that you use to migrate a workload from Azure Germany to global Azure typically is similar to the process that's used to migrate applications to the cloud. The steps in the migration process are:
Learn about tools, techniques, and recommendations for migrating resources in th
- [Identity](./germany-migration-identity.md) - [Security](./germany-migration-security.md) - [Management tools](./germany-migration-management-tools.md)-- [Media](./germany-migration-media.md)
+- [Media](./germany-migration-media.md)
governance Guest Configuration https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/governance/policy/concepts/guest-configuration.md
The ".x" text is symbolic to represent new minor versions of Linux distributions
|Microsoft|Windows Server|2012 - 2019| |Microsoft|Windows Client|Windows 10| |OpenLogic|CentOS|7.3 -8.x|
-|Red Hat|Red Hat Enterprise Linux|7.4 - 8.x|
+|Red Hat|Red Hat Enterprise Linux\*|7.4 - 8.x|
|SUSE|SLES|12 SP3-SP5, 15.x|
+\* Red Hat CoreOS isn't supported.
+ Custom virtual machine images are supported by Guest Configuration policy definitions as long as they're one of the operating systems in the table above.
hdinsight Apache Hbase Tutorial Get Started Linux https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/hdinsight/hbase/apache-hbase-tutorial-get-started-linux.md
You can query data in HBase tables by using [Apache Hive](https://hive.apache.or
The Hive query to access HBase data need not be executed from the HBase cluster. Any cluster that comes with Hive (including Spark, Hadoop, HBase, or Interactive Query) can be used to query HBase data, provided the following steps are completed: 1. Both clusters must be attached to the same Virtual Network and Subnet
-2. Copy `/usr/hdp/$(hdp-select --version)/hbase/conf/hbase-site.xml` from the HBase cluster headnodes to the Hive cluster headnodes
+2. Copy `/usr/hdp/$(hdp-select --version)/hbase/conf/hbase-site.xml` from the HBase cluster headnodes to the Hive cluster headnodes and workernodes.
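One way to script that copy from an HBase cluster headnode is sketched below; the SSH user and the Hive cluster node host names are placeholders, and the sketch only stages the file under `/tmp` on each destination node so you can place it in the appropriate configuration directory afterwards.

```bash
# Run from an HBase cluster headnode. The SSH user and node host names are
# placeholders; repeat the list for every Hive cluster headnode and workernode.
SRC="/usr/hdp/$(hdp-select --version)/hbase/conf/hbase-site.xml"
for node in hn0-hivecluster hn1-hivecluster wn0-hivecluster wn1-hivecluster; do
  scp "$SRC" "sshuser@${node}:/tmp/hbase-site.xml"
done
```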
### Secure Clusters
HBase data can also be queried from Hive using ESP-enabled HBase:
1. When following a multi-cluster pattern, both clusters must be ESP-enabled. 2. To allow Hive to query HBase data, make sure that the `hive` user is granted permissions to access the HBase data via the HBase Apache Ranger plugin.
-3. When using separate, ESP-enabled clusters, the contents of `/etc/hosts` from the HBase cluster headnodes must be appended to `/etc/hosts` of the Hive cluster headnodes.
+3. When using separate, ESP-enabled clusters, the contents of `/etc/hosts` from the HBase cluster headnodes must be appended to `/etc/hosts` of the Hive cluster headnodes and workernodes.
> [!NOTE] > After scaling either clusters, `/etc/hosts` must be appended again
hdinsight Hdinsight Faq https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/hdinsight/hdinsight-faq.md
- Title: Azure HDInsight frequently asked questions
-description: Frequently asked questions about HDInsight
-keywords: frequently asked questions, faq
----- Previously updated : 11/20/2019--
-# Azure HDInsight: Frequently asked questions
-
-This article provides answers to some of the most common questions about how to run [Azure HDInsight](https://azure.microsoft.com/services/hdinsight/).
-
-## Creating or deleting HDInsight clusters
-
-### How do I provision an HDInsight cluster?
-
-To review the HDInsight clusters types, and the provisioning methods, see [Set up clusters in HDInsight with Apache Hadoop, Apache Spark, Apache Kafka, and more](./hdinsight-hadoop-provision-linux-clusters.md).
-
-### How do I delete an existing HDInsight cluster?
-
-To learn more about deleting a cluster when it's no longer in use, see [Delete an HDInsight cluster](hdinsight-delete-cluster.md).
-
-Try to leave at least 30 to 60 minutes between create and delete operations. Otherwise the operation may fail with the following error message:
-
-``Conflict (HTTP Status Code: 409) error when attempting to delete a cluster immediately after creation of a cluster. If you encounter this error, wait until the newly created cluster is in operational state before attempting to delete it.``
-
-### How do I select the correct number of cores or nodes for my workload?
-
-The appropriate number of cores and other configuration options depend on various factors.
-
-For more information, see [Capacity planning for HDInsight clusters](./hdinsight-capacity-planning.md).
-
-### What are the various types of nodes in an HDInsight cluster?
-
-See [Resource types in Azure HDInsight clusters](hdinsight-virtual-network-architecture.md#resource-types-in-azure-hdinsight-clusters).
-
-### What are the best practices for creating large HDInsight clusters?
-
-1. Recommend setting up HDInsight clusters with a [Custom Ambari DB](./hdinsight-custom-ambari-db.md) to improve the cluster scalability.
-2. Use [Azure Data Lake Storage Gen2](./hdinsight-hadoop-use-data-lake-storage-gen2.md) to create HDInsight clusters to take advantage of higher bandwidth and other performance characteristics of Azure Data Lake Storage Gen2.
-3. Headnodes should be sufficiently large to accommodate multiple master services running on these nodes.
-4. Some specific workloads such as Interactive Query will also need larger Zookeeper nodes. Please consider minimum of 8 core VMs.
-5. In the case of Hive and Spark, use [External Hive metastore](./hdinsight-use-external-metadata-stores.md).
-
-## Individual Components
-
-### Can I install additional components on my cluster?
-
-Yes. To install additional components or customize cluster configuration, use:
--- Scripts during or after creation. Scripts are invoked via [script action](./hdinsight-hadoop-customize-cluster-linux.md). Script action is a configuration option you can use from the Azure portal, HDInsight Windows PowerShell cmdlets, or the HDInsight .NET SDK. This configuration option can be used from the Azure portal, HDInsight Windows PowerShell cmdlets, or the HDInsight .NET SDK.--- [HDInsight Application Platform](https://azure.microsoft.com/services/hdinsight/partner-ecosystem/) to install applications.-
-For a list of supported components see [What are the Apache Hadoop components and versions available with HDInsight?](./hdinsight-component-versioning.md)
-
-### Can I upgrade the individual components that are pre-installed on the cluster?
-
-If you upgrade built-in components or applications that are pre-installed on your cluster, the resulting configuration won't be supported by Microsoft. These system configurations have not been tested by Microsoft. Try to use a different version of the HDInsight cluster that may already have the upgraded version of the component pre-installed.
-
-For example, upgrading Hive as an individual component isn't supported. HDInsight is a managed service, and many services are integrated with Ambari server and tested. Upgrading a Hive on its own causes the indexed binaries of other components to change, and will cause component integration issues on your cluster.
-
-### Can Spark and Kafka run on the same HDInsight cluster?
-
-No, it's not possible to run Apache Kafka and Apache Spark on the same HDInsight cluster. Create separate clusters for Kafka and Spark to avoid resource contention issues.
-
-### How do I change timezone in Ambari?
-
-1. Open the Ambari Web UI at `https://CLUSTERNAME.azurehdinsight.net`, where CLUSTERNAME is the name of your cluster.
-2. In the upper-right corner, select admin | Settings.
-
- :::image type="content" source="media/hdinsight-faq/ambari-settings.png" alt-text="Ambari Settings":::
-
-3. In the User Settings window, select the new timezone from the Timezone drop down, and then click Save.
-
- :::image type="content" source="media/hdinsight-faq/ambari-user-settings.png" alt-text="Ambari User Settings":::
-
-## Metastore
-
-### How can I migrate from the existing metastore to Azure SQL Database?
-
-To migrate from SQL Server to Azure SQL Database, see [Tutorial: Migrate SQL Server to a single database or pooled database in Azure SQL Database offline using DMS](../dms/tutorial-sql-server-to-azure-sql.md).
-
-### Is the Hive metastore deleted when the cluster is deleted?
-
-It depends on the type of metastore that your cluster is configured to use.
-
-For a default metastore: The default metastore is part of the cluster lifecycle. When you delete a cluster, the corresponding metastore and metadata are also deleted.
-
-For a custom metastore: The lifecycle of the metastore isn't tied to a cluster's lifecycle. So, you can create and delete clusters without losing metadata. Metadata such as your Hive schemas persists even after you delete and re-create the HDInsight cluster.
-
-For more information, see [Use external metadata stores in Azure HDInsight](hdinsight-use-external-metadata-stores.md).
-
-### Does migrating a Hive metastore also migrate the default policies of the Ranger database?
-
-No, the policy definition is in the Ranger database, so migrating the Ranger database will migrate its policy.
-
-### Can you migrate a Hive metastore from an Enterprise Security Package (ESP) cluster to a non-ESP cluster, and the other way around?
-
-Yes, you can migrate a Hive metastore from an ESP to a non-ESP cluster.
-
-### How can I estimate the size of a Hive metastore database?
-
-A Hive metastore is used to store the metadata for data sources that are used by the Hive server. The size requirements depend partly on the number and complexity of your Hive data sources. These items can't be estimated up front. As outlined in [Hive metastore guidelines](hdinsight-use-external-metadata-stores.md#hive-metastore-guidelines), you can start with a S2 tier. The tier provides 50 DTU and 250 GB of storage, and if you see a bottleneck, scale up the database.
-
-### Do you support any other database other than Azure SQL Database as an external metastore?
-
-No, Microsoft supports only Azure SQL Database as an external custom metastore.
-
-### Can I share a metastore across multiple clusters?
-
-Yes, you can share custom metastore across multiple clusters as long as they're using the same version of HDInsight.
-
-## Connectivity and virtual networks
-
-### What are the implications of blocking ports 22 and 23 on my network?
-
-If you block ports 22 and port 23, you won't have SSH access to the cluster. These ports aren't used by HDInsight service.
-
-For more information, see the following documents:
--- [Ports used by Apache Hadoop services on HDInsight](./hdinsight-hadoop-port-settings-for-services.md)--- [Secure incoming traffic to HDInsight clusters in a virtual network with private endpoint](https://azure.microsoft.com/blog/secure-incoming-traffic-to-hdinsight-clusters-in-a-vnet-with-private-endpoint/)--- [HDInsight management IP addresses](./hdinsight-management-ip-addresses.md)-
-### Can I deploy an additional virtual machine within the same subnet as an HDInsight cluster?
-
-Yes, you can deploy an additional virtual machine within the same subnet as an HDInsight cluster. The following configurations are possible:
--- Edge nodes: You can add another edge node to the cluster, as described in [Use empty edge nodes on Apache Hadoop clusters in HDInsight](hdinsight-apps-use-edge-node.md).--- Standalone nodes: You can add a standalone virtual machine to the same subnet and access the cluster from that virtual machine by using the private end point `https://<CLUSTERNAME>-int.azurehdinsight.net`. For more information, see [Control network traffic](./control-network-traffic.md).-
-### Should I store data on the local disk of an edge node?
-
-No, storing data on a local disk isn't a good idea. If the node fails, all data stored locally will be lost. We recommend storing data in Azure Data Lake Storage Gen2 or Azure Blob storage, or by mounting an Azure Files share for storing the data.
--
-### Can I add an existing HDInsight cluster to another virtual network?
-
-No, you can't. The virtual network should be specified at the time of provisioning. If no virtual network is specified during provisioning, the deployment creates an internal network that isn't accessible from outside. For more information, see [Add HDInsight to an existing virtual network](hdinsight-plan-virtual-network-deployment.md#existingvnet).
-
-## Security and Certificates
-
-### What are the recommendations for malware protection on Azure HDInsight clusters?
-
-For information on malware protection, see [Microsoft Antimalware for Azure Cloud Services and Virtual Machines](../security/fundamentals/antimalware.md).
-
-### How do I create a keytab for an HDInsight ESP cluster?
-
-Create a Kerberos keytab for your domain username. You can later use this keytab to authenticate to remote domain-joined clusters without entering a password. The domain name is uppercase:
-
-```shell
-ktutil
-ktutil: addent -password -p <username>@<DOMAIN.COM> -k 1 -e RC4-HMAC
-Password for <username>@<DOMAIN.COM>: <password>
-ktutil: wkt <username>.keytab
-ktutil: q
-```
-
-### Can I use an existing Azure Active Directory tenant to create an HDInsight cluster that has the ESP?
-
-Enable Azure Active Directory Domain Services (Azure AD DS) before you can create an HDInsight cluster with ESP. Open-source Hadoop relies on Kerberos for Authentication (as opposed to OAuth).
-
-To join VMs to a domain, you must have a domain controller. Azure AD DS is the managed domain controller, and is considered an extension of Azure Active Directory. Azure AD DS provides all the Kerberos requirements to build a secure Hadoop cluster in a managed way. HDInsight as a managed service integrates with Azure AD DS to provide security.
-
-### Can I use a self-signed certificate in an AAD-DS secure LDAP setup and provision an ESP cluster?
-
-Using a certificate issued by a certificate authority is recommended. But using a self-signed certificate is also supported on ESP. For more information, see:
--- [Enable Azure Active Directory Domain Services](domain-joined/apache-domain-joined-configure-using-azure-adds.md#enable-azure-ad-ds)--- [Tutorial: Configure secure LDAP for an Azure Active Directory Domain Services managed domain](../active-directory-domain-services/tutorial-configure-ldaps.md)-
-### How can I pull login activity shown in Ranger?
-
-For auditing requirements, Microsoft recommends enabling Azure Monitor logs as described in [Use Azure Monitor logs to monitor HDInsight clusters](./hdinsight-hadoop-oms-log-analytics-tutorial.md).
-
-### Can I disable `Clamscan` on my cluster?
-
-`Clamscan` is the antivirus software that runs on the HDInsight cluster and is used by Azure security (azsecd) to protect your clusters from virus attacks. Microsoft strongly recommends that users refrain from making any changes to the default `Clamscan` configuration.
-
-This process doesn't interfere with or take any cycles away from other processes. It will always yield to other process. CPU spikes from `Clamscan` should be seen only when the system is idle.
-
-In scenarios in which you must control the schedule, you can use the following steps:
-
-1. Disable automatic execution using the following command:
-
- sudo `usr/local/bin/azsecd config -s clamav -d Disabled`
- sudo service azsecd restart
-
-1. Add a Cron job that runs the following command as root:
-
- `/usr/local/bin/azsecd manual -s clamav`
-
-For more information about how to set up and run a cron job, see [How do I set up a Cron job](https://askubuntu.com/questions/2368/how-do-i-set-up-a-cron-job)?
-
-### Why is LLAP available on Spark ESP clusters?
-LLAP is enabled for security reasons (Apache Ranger), not performance. Use larger node VMs to accommodate for the resource usage of LLAP (for example, minimum D13V2).
-
-### How can I add additional AAD groups after creating an ESP cluster?
-There are two ways to achieve this goal:
-1- You can recreate the cluster and add the additional group at the time of cluster creation. If you're using scoped synchronization in AAD-DS, make sure group B is included in the scoped synchronization.
-2- Add the group as a nested sub group of the previous group that was used to create the ESP cluster. For example, if you've created an ESP cluster with group `A`, you can later on add group `B` as a nested subgroup of `A` and after approximately one hour it will be synced and available in the cluster automatically.
-
-## Storage
-
-### Can I add an Azure Data Lake Storage Gen2 to an existing HDInsight cluster as an additional storage account?
-
-No, it's currently not possible to add an Azure Data Lake Storage Gen2 storage account to a cluster that has blob storage as its primary storage. For more information, see [Compare storage options](hdinsight-hadoop-compare-storage-options.md).
-
-### How can I find the currently linked Service Principal for a Data Lake storage account?
-
-You can find your settings in **Data Lake Storage Gen1 access** under your cluster properties in the Azure portal. For more information, see [Verify cluster setup](../data-lake-store/data-lake-store-hdinsight-hadoop-use-portal.md#verify-cluster-set-up).
-
-### How can I calculate the usage of storage accounts and blob containers for my HDInsight clusters?
-
-Do one of the following actions:
--- [Use PowerShell](../storage/scripts/storage-blobs-container-calculate-size-powershell.md)--- Find the size of the */user/hive/.Trash/* folder on the HDInsight cluster, using the following command line:
-
- `hdfs dfs -du -h /user/hive/.Trash/`
-
-### How can I set up auditing for my blob storage account?
-
-To audit blob storage accounts, configure monitoring using the procedure at [Monitor a storage account in the Azure portal](../storage/common/manage-storage-analytics-logs.md). An HDFS-audit log provides only auditing information for the local HDFS filesystem only (hdfs://mycluster). It doesn't include operations that are done on remote storage.
-
-### How can I transfer files between a blob container and an HDInsight head node?
-
-Run a script similar to the following shell script on your head node:
-
-```shell
-for i in cat filenames.txt
-do
- hadoop fs -get $i <local destination>
-done
-```
-
-> [!NOTE]
-> The file *filenames.txt* will have the absolute path of the files in the blob containers.
-
-### Are there any Ranger plugins for storage?
-
-Currently, no Ranger plugin exists for blob storage and Azure Data Lake Storage Gen1 or Gen2. For ESP clusters, you should use Azure Data Lake Storage. You can at least set fine-grain permissions manually at the file system level using HDFS tools. Also, when using Azure Data Lake Storage, ESP clusters will do some of the file system access control using Azure Active Directory at the cluster level.
-
-You can assign data access policies to your users' security groups by using the Azure Storage Explorer. For more information, see:
--- [How do I set permissions for Azure AD users to query data in Data Lake Storage Gen2 by using Hive or other services?](hdinsight-hadoop-use-data-lake-storage-gen2.md#how-do-i-set-permissions-for-azure-ad-users-to-query-data-in-data-lake-storage-gen2-by-using-hive-or-other-services)-- [Set file and directory level permissions using Azure Storage Explorer with Azure Data Lake Storage Gen2](../storage/blobs/data-lake-storage-explorer.md)-
-### Can I increase HDFS storage on a cluster without increasing the disk size of worker nodes?
-
-No. You can't increase the disk size of any worker node. So the only way to increase disk size is to drop the cluster and recreate it with larger worker VMs. Don't use HDFS for storing any of your HDInsight data, because the data is deleted if you delete your cluster. Instead, store your data in Azure. Scaling up the cluster can also add additional capacity to your HDInsight cluster.
-
-## Edge nodes
-
-### Can I add an edge node after the cluster has been created?
-
-See [Use empty edge nodes on Apache Hadoop clusters in HDInsight](hdinsight-apps-use-edge-node.md).
-
-### How can I connect to an edge node?
-
-After you create an edge node, you can connect to it by using SSH on port 22. You can find the name of the edge node from the cluster portal. The names usually end with *-ed*.
-
-### Why are persisted scripts not running automatically on newly created edge nodes?
-
-You use persisted scripts to customize new worker nodes added to the cluster through scaling operations. Persisted scripts don't apply to edge nodes.
-
-## REST API
-
-### What are the REST API calls to pull a Tez query view from the cluster?
-
-You can use the following REST endpoints to pull the necessary information in JSON format. Use basic authentication headers to make the requests.
--- `Tez Query View`: *https:\//\<cluster name>.azurehdinsight.net/ws/v1/timeline/HIVE_QUERY_ID/*-- `Tez Dag View`: *https:\//\<cluster name>.azurehdinsight.net/ws/v1/timeline/TEZ_DAG_ID/*-
-### How do I retrieve the configuration details from HDI cluster by using an Azure Active Directory user?
-
-To negotiate proper authentication tokens with your AAD user, go through the gateway by using the following format:
-
-* https://`<cluster dnsname>`.azurehdinsight.net/api/v1/clusters/testclusterdem/stack_versions/1/repository_versions/1
-
-### How do I use Ambari Restful API to monitor YARN performance?
-
-If you call the Curl command in the same virtual network or a peered virtual network, the command is:
-
-```curl
-curl -u <cluster login username> -sS -G
-http://<headnodehost>:8080/api/v1/clusters/<ClusterName>/services/YARN/components/NODEMANAGER?fields=metrics/cpu
-```
-
-If you call the command from outside the virtual network or from a non-peered virtual network, the command format is:
--- For a non-ESP cluster:
-
- ```curl
- curl -u <cluster login username> -sS -G
- https://<ClusterName>.azurehdinsight.net/api/v1/clusters/<ClusterName>/services/YARN/components/NODEMANAGER?fields=metrics/cpu
- ```
--- For an ESP cluster:
-
- ```curl
- curl -u <cluster login username>-sS -G
- https://<ClusterName>.azurehdinsight.net/api/v1/clusters/<ClusterName>/services/YARN/components/NODEMANAGER?fields=metrics/cpu
- ```
-
-> [!NOTE]
-> Curl will prompt you for a password. You must enter a valid password for the cluster login username.
-
-## Billing
-
-### How much does it cost to deploy an HDInsight cluster?
-
-For more information about pricing and FAQ related to billing, see the [Azure HDInsight Pricing](https://azure.microsoft.com/pricing/details/hdinsight/) page.
-
-### When does HDInsight billing start & stop?
-
-HDInsight cluster billing starts once a cluster is created and stops when the cluster is deleted. Billing is pro-rated per minute.
-
-### How do I cancel my subscription?
-
-For information about how to cancel your subscription, see [Cancel your Azure subscription](../cost-management-billing/manage/cancel-azure-subscription.md).
-
-### For pay-as-you-go subscriptions, what happens after I cancel my subscription?
-
-For information about your subscription after it's canceled, see
-[What happens after I cancel my subscription?](../cost-management-billing/manage/cancel-azure-subscription.md)
-
-## Hive
-
-### Why does the Hive version appear as 1.2.1000 instead of 2.1 in the Ambari UI even though I'm running an HDInsight 3.6 cluster?
-
-Although only 1.2 appears in the Ambari UI, HDInsight 3.6 contains both Hive 1.2 and Hive 2.1.
-
-## Other FAQ
-
-### What does HDInsight offer for real-time stream processing capabilities?
-
-For information about integration capabilities of stream processing, see [Choosing a stream processing technology in Azure](/azure/architecture/data-guide/technology-choices/stream-processing).
-
-### Is there a way to dynamically kill the head node of the cluster when the cluster is idle for a specific period?
-
-You can't do this action with HDInsight clusters. You can use Azure Data Factory for these scenarios.
-
-### What compliance offerings does HDInsight offer?
-
-For compliance information, see the [Microsoft Trust Center](https://www.microsoft.com/trust-center) and the [Overview of Microsoft Azure compliance](https://gallery.technet.microsoft.com/Overview-of-Azure-c1be3942).
hdinsight Hdinsight Upload Data https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/hdinsight/hdinsight-upload-data.md
Microsoft provides the following utilities to work with Azure Storage:
| Tool | Linux | OS X | Windows | | |::|::|::|
-| [Azure portal](../storage/blobs/storage-quickstart-blobs-portal.md) |✔ |✔ |✔ |
-| [Azure CLI](../storage/blobs/storage-quickstart-blobs-cli.md) |✔ |✔ |✔ |
-| [Azure PowerShell](../storage/blobs/storage-quickstart-blobs-powershell.md) | | |✔ |
-| [AzCopy](../storage/common/storage-use-azcopy-v10.md) |✔ | |✔ |
-| [Hadoop command](#hadoop-command-line) |✔ |✔ |✔ |
+| [Azure portal](../storage/blobs/storage-quickstart-blobs-portal.md) |✔ |✔ |✔ |
+| [Azure CLI](../storage/blobs/storage-quickstart-blobs-cli.md) |✔ |✔ |✔ |
+| [Azure PowerShell](../storage/blobs/storage-quickstart-blobs-powershell.md) | | |✔ |
+| [AzCopy](../storage/common/storage-use-azcopy-v10.md) |✔ | |✔ |
+| [Hadoop command](#hadoop-command-line) |✔ |✔ |✔ |
> [!NOTE] > The Hadoop command is only available on the HDInsight cluster. The command only allows loading data from the local file system into Azure Storage.
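For instance, once you're connected to the cluster over SSH, copying a local file into the cluster's default storage looks roughly like the following sketch (the file and target path are placeholders).

```bash
# Illustrative only; run from an SSH session on the cluster head node.
hadoop fs -copyFromLocal data.txt /example/data/data.txt
```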
There are also several applications that provide a graphical interface for worki
| Client | Linux | OS X | Windows | | |::|::|::|
-| [Microsoft Visual Studio Tools for HDInsight](hadoop/apache-hadoop-visual-studio-tools-get-started.md#explore-linked-resources) |✔ |✔ |✔ |
-| [Azure Storage Explorer](../storage/blobs/storage-quickstart-blobs-storage-explorer.md) |✔ |✔ |✔ |
-| [`Cerulea`](https://www.cerebrata.com/products/cerulean/features/azure-storage) | | |✔ |
-| [CloudXplorer](https://clumsyleaf.com/products/cloudxplorer) | | |✔ |
-| [CloudBerry Explorer for Microsoft Azure](https://www.cloudberrylab.com/free-microsoft-azure-explorer.aspx) | | |✔ |
-| [Cyberduck](https://cyberduck.io/) | |✔ |✔ |
+| [Microsoft Visual Studio Tools for HDInsight](hadoop/apache-hadoop-visual-studio-tools-get-started.md#explore-linked-resources) |✔ |✔ |✔ |
+| [Azure Storage Explorer](../storage/blobs/storage-quickstart-blobs-storage-explorer.md) |✔ |✔ |✔ |
+| [`Cerulea`](https://www.cerebrata.com/products/cerulean/features/azure-storage) | | |✔ |
+| [CloudXplorer](https://clumsyleaf.com/products/cloudxplorer) | | |✔ |
+| [CloudBerry Explorer for Microsoft Azure](https://www.cloudberrylab.com/free-microsoft-azure-explorer.aspx) | | |✔ |
+| [Cyberduck](https://cyberduck.io/) | |✔ |✔ |
## Mount Azure Storage as Local Drive
iot-central Tutorial Add Edge As Leaf Device https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-central/core/tutorial-add-edge-as-leaf-device.md
- Title: Tutorial - Add an Azure IoT Edge device to Azure IoT Central | Microsoft Docs
-description: Tutorial - Add an Azure IoT Edge device to your Azure IoT Central application
-- Previously updated : 05/29/2020------
-# Tutorial: Add an Azure IoT Edge device to your Azure IoT Central application
-
-This tutorial shows you how to configure and add an Azure IoT Edge device to your Azure IoT Central application. The tutorial uses an IoT Edge-enabled Linux virtual machine (VM) to simulate an IoT Edge device. The IoT Edge device uses a module that generates simulated environmental telemetry. You view the telemetry on a dashboard in your IoT Central application.
-
-In this tutorial, you learn how to:
-
-> [!div class="checklist"]
-> * Create a device template for an IoT Edge device
-> * Create an IoT Edge device in IoT Central
-> * Deploy a simulated IoT Edge device to a Linux VM
-
-## Prerequisites
-
-To complete the steps in this tutorial, you need:
--
-Download the IoT Edge manifest file from GitHub. Right-click on the following link and then select **Save link as**: [EnvironmentalSensorManifest.json](https://raw.githubusercontent.com/Azure-Samples/iot-central-docs-samples/master/iotedge/EnvironmentalSensorManifest.json)
-
-## Create device template
-
-In this section, you create an IoT Central device template for an IoT Edge device. You import an IoT Edge manifest to get started, and then modify the template to add telemetry definitions and views:
-
-### Import manifest to create template
-
-To create a device template from an IoT Edge manifest:
-
-1. In your IoT Central application, navigate to **Device templates** and select **+ New**.
-
-1. On the **Select template type** page, select the **Azure IoT Edge** tile. Then select **Next: Customize**.
-
-1. On the **Upload an Azure IoT Edge deployment manifest** page, enter *Environmental Sensor Edge Device* as the device template name. Then select **Browse** to upload the **EnvironmentalSensorManifest.json** you downloaded previously. Then select **Next: Review**.
-
-1. On the **Review** page, select **Create**.
-
-1. Select the **Manage** interface in the **SimulatedTemperatureSensor** module to view the two properties defined in the manifest:
--
-> [!TIP]
-> This deployment manifest pulls module images from an Azure Container Registry repository that doesn't require any credentials to connect. If you want to use module images from a private repository, set the container registry credentials in the manifest.
-
-### Add telemetry to manifest
-
-An IoT Edge manifest doesn't define the telemetry a module sends. You add the telemetry definitions to the device template in IoT Central. The **SimulatedTemperatureSensor** module sends telemetry messages that look like the following JSON:
-
-```json
-{
- "machine": {
- "temperature": 75.0,
- "pressure": 40.2
- },
- "ambient": {
- "temperature": 23.0,
- "humidity": 30.0
- },
- "timeCreated": ""
-}
-```
-
-To add the telemetry definitions to the device template:
-
-1. Select the **Manage** interface in the **Environmental Sensor Edge Device** template.
-
-1. Select **+ Add capability**. Enter *machine* as the **Display name** and make sure that the **Capability type** is **Telemetry**.
-
-1. Select **Object** as the schema type, and then select **Define**. On the object definition page, add *temperature* and *pressure* as attributes of type **Double** and then select **Apply**.
-
-1. Select **+ Add capability**. Enter *ambient* as the **Display name** and make sure that the **Capability type** is **Telemetry**.
-
-1. Select **Object** as the schema type, and then select **Define**. On the object definition page, add *temperature* and *humidity* as attributes of type **Double** and then select **Apply**.
-
-1. Select **+ Add capability**. Enter *timeCreated* as the **Display name** and make sure that the **Capability type** is **Telemetry**.
-
-1. Select **DateTime** as the schema type.
-
-1. Select **Save** to update the template.
-
-The **Manage** interface now includes the **machine**, **ambient**, and **timeCreated** telemetry types:
--
-### Add views to template
-
-The device template doesn't yet have a view that lets an operator see the telemetry from the IoT Edge device. To add a view to the device template:
-
-1. Select **Views** in the **Environmental Sensor Edge Device** template.
-
-1. On the **Select to add a new view** page, select the **Visualizing the device** tile.
-
-1. Change the view name to *View IoT Edge device telemetry*.
-
-1. Select the **ambient** and **machine** telemetry types. Then select **Add tile**.
-
-1. Select **Save** to save the **View IoT Edge device telemetry** view.
--
-### Publish the template
-
-Before you can add a device that uses the **Environmental Sensor Edge Device** template, you must publish the template.
-
-Navigate to the **Environmental Sensor Edge Device** template and select **Publish**. On the **Publish this device template to the application** panel, select **Publish** to publish the template:
--
-## Add IoT Edge device
-
-Now you've published the **Environmental Sensor Edge Device** template, you can add a device to your IoT Central application:
-
-1. In your IoT Central application, navigate to the **Devices** page and select **Environmental Sensor Edge Device** in the list of available templates.
-
-1. Select **+ New** to add a new device from the template. On the **Create new device** page, select **Create**.
-
-You now have a new device with the status **Registered**:
--
-### Get the device credentials
-
-When you deploy the IoT Edge device later in this tutorial, you need the credentials that allow the device to connect to your IoT Central application. To get the device credentials:
-
-1. On the **Device** page, select the device you created.
-
-1. Select **Connect**.
-
-1. On the **Device connection** page, make a note of the **ID Scope**, the **Device ID**, and the **Primary Key**. You use these values later.
-
-1. Select **Close**.
-
-You've now finished configuring your IoT Central application to enable an IoT Edge device to connect.
-
-## Deploy an IoT Edge device
-
-In this tutorial, you use an Azure IoT Edge-enabled Linux VM, created on Azure to simulate an IoT Edge device. To create the IoT Edge-enabled VM in your Azure subscription, click:
-
-[![Deploy to Azure Button for iotedge-vm-deploy](https://aka.ms/deploytoazurebutton)](https://portal.azure.com/#create/Microsoft.Template/uri/https%3A%2F%2Fraw.githubusercontent.com%2Fazure%2Fiotedge-vm-deploy%2Fmaster%2FedgeDeploy.json)
-
-On the **Custom deployment** page:
-
-1. Select your Azure subscription.
-
-1. Select **Create new** to create a new resource group called *central-edge-rg*.
-
-1. Choose a region close to you.
-
-1. Add a unique **DNS Label Prefix** such as *contoso-central-edge*.
-
-1. Choose an admin user name for the virtual machine.
-
-1. Enter *temp* as the connection string. Later, you configure the device to connect using DPS.
-
-1. Accept the default values for the VM size, Ubuntu version, and location.
-
-1. Select **password** as the authentication type.
-
-1. Enter a password for the VM.
-
-1. Then select **Review + Create**.
-
-1. Review your choices and then select **Create**:
-
- :::image type="content" source="media/tutorial-add-edge-as-leaf-device/vm-deployment.png" alt-text="Create an IoT Edge VM":::
-
-The deployment takes a couple of minutes to complete. When the deployment is complete, navigate to the **central-edge-rg** resource group in the Azure portal.
-
-### Configure the IoT Edge VM
-
-To configure IoT Edge in the VM to use DPS to register and connect to your IoT Central application:
-
-1. In the **central-edge-rg** resource group, select the virtual machine instance.
-
-1. In the **Support + troubleshooting** section, select **Serial console**. If you're prompted to configure boot diagnostics, follow the instructions in the portal.
-
-1. Press **Enter** to see the `login:` prompt. Enter your username and password to sign in.
-
-1. Run the following command to check the IoT Edge runtime version. At the time of writing, the version is 1.0.9.1:
-
- ```bash
- sudo iotedge --version
- ```
-
-1. Use the `nano` editor to open the IoT Edge config.yaml file:
-
- ```bash
- sudo nano /etc/iotedge/config.yaml
- ```
-
-1. Scroll down until you see `# Manual provisioning configuration`. Comment out the next three lines as shown in the following snippet:
-
- ```yaml
- # Manual provisioning configuration
- #provisioning:
- # source: "manual"
- # device_connection_string: "temp"
- ```
-
-1. Scroll down until you see `# DPS symmetric key provisioning configuration`. Uncomment the next eight lines as shown in the following snippet:
-
- ```yaml
- # DPS symmetric key provisioning configuration
- provisioning:
- source: "dps"
- global_endpoint: "https://global.azure-devices-provisioning.net"
- scope_id: "{scope_id}"
- attestation:
- method: "symmetric_key"
- registration_id: "{registration_id}"
- symmetric_key: "{symmetric_key}"
- ```
-
- > [!TIP]
- > Make sure there's no space left in front of `provisioning:`
-
-1. Replace `{scope_id}` with the **ID Scope** you made a note of previously.
-
-1. Replace `{registration_id}` with the **Device ID** you made a note of previously.
-
-1. Replace `{symmetric_key}` with the **Primary key** you made a note of previously.
-
-1. Save the changes (**Ctrl-O**) and exit (**Ctrl-X**) the `nano` editor.
-
-1. Run the following command to restart the IoT Edge daemon:
-
- ```bash
- sudo systemctl restart iotedge
- ```
-
-1. To check the status of the IoT Edge modules, run the following command:
-
- ```bash
- iotedge list
- ```
-
- The following sample output shows the running modules:
-
- ```bash
- NAME STATUS DESCRIPTION CONFIG
- SimulatedTemperatureSensor running Up 20 seconds mcr.microsoft.com/azureiotedge-simulated-temperature-sensor:1.0
- edgeAgent running Up 27 seconds mcr.microsoft.com/azureiotedge-agent:1.0
- edgeHub running Up 22 seconds mcr.microsoft.com/azureiotedge-hub:1.0
- ```
-
- > [!TIP]
- > You may need to wait for all the modules to start running.
-
-## View the telemetry
-
-The simulated IoT Edge device is now running in the VM. In your IoT Central application, the device status is now **Provisioned** on the **Devices** page:
--
-You can see the telemetry from the device on the **View IoT Edge device telemetry** page:
--
-The **Modules** page shows the status of the IoT Edge modules on the device:
--
-## Clean up resources
-
-If you plan to continue working with the IoT Edge VM, you can keep and reuse the resources you used in this tutorial. Otherwise, you can delete the resources you created in this tutorial to avoid additional charges:
-
-* To delete the IoT Edge VM and its associated resources, delete the **central-edge-rg** resource group in the Azure portal.
-* To delete the IoT Central application, navigate to the **Your application** page in the **Administration** section of the application and select **Delete**.
-
-## Next steps
-
-Now that you've learned how to work with and manage IoT Edge devices in IoT Central, a suggested next step is to read:
-
-> [!div class="nextstepaction"]
-> [Develop IoT Edge modules](../../iot-edge/tutorial-develop-for-linux.md)
iot-develop Quickstart Devkit Microchip Atsame54 Xpro https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-develop/quickstart-devkit-microchip-atsame54-xpro.md
To connect the Microchip E54 to Azure, you'll modify a configuration file for Az
### Connect the device
-1. On the Microchip E54, locate the **Reset** button, the **Ethernet** port, and the Micro USB port, which is labeled **Debug USB**.
+1. On the Microchip E54, locate the **Reset** button, the **Ethernet** port, and the Micro USB port, which is labeled **Debug USB**. Each component is highlighted in the following picture:
+
+ :::image type="content" source="media/quickstart-devkit-microchip-atsame54-xpro/microchip-xpro-board.png" alt-text="Locate key components on the Microchip E54 evaluation kit board":::
1. Connect the Micro USB cable to the **Debug USB** port on the Microchip E54, and then connect it to your computer. > [!NOTE]
To connect the Microchip E54 to Azure, you'll modify a configuration file for Az
If you have the Weather Click sensor and the mikroBUS Xplained Pro adapter, follow the steps in this section; otherwise, skip to [Build the image](#build-the-image). You can complete this quickstart even if you don't have a sensor. The sample code for the device returns simulated data if a real sensor is not present.
-1. If you have the Weather Click sensor and the mikroBUS Xplained Pro adapter, install them on the Microchip E54.
+1. If you have the Weather Click sensor and the mikroBUS Xplained Pro adapter, install them on the Microchip E54 as shown in the following photo:
+
+ :::image type="content" source="media/quickstart-devkit-microchip-atsame54-xpro/sam-e54-sensor.png" alt-text="Install Weather Click sensor and mikroBUS Xplained Pro adapter on the Microchip ES4":::
+ 1. Reopen the configuration file you edited previously: *getting-started\Microchip\ATSAME54-XPRO\app\azure_config.h*
iot-develop Quickstart Devkit Mxchip Az3166 Iot Hub https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-develop/quickstart-devkit-mxchip-az3166-iot-hub.md
You'll complete the following tasks:
* [Azure IoT Explorer](https://github.com/Azure/azure-iot-explorer/releases): Cross-platform utility to monitor and manage Azure IoT * Hardware
- * The [MXCHIP AZ3166 IoT DevKit](https://aka.ms/iot-devkit) (MXCHIP DevKit)
+ * The [MXCHIP AZ3166 IoT DevKit](https://www.seeedstudio.com/AZ3166-IOT-Developer-Kit.html) (MXCHIP DevKit)
* Wi-Fi 2.4 GHz * USB 2.0 A male to Micro USB male cable
iot-develop Quickstart Devkit Mxchip Az3166 https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-develop/quickstart-devkit-mxchip-az3166.md
You'll complete the following tasks:
* [Git](https://git-scm.com/downloads) for cloning the repository * Hardware
- * The [MXCHIP AZ3166 IoT DevKit](https://aka.ms/iot-devkit) (MXCHIP DevKit)
+ * The [MXCHIP AZ3166 IoT DevKit](https://www.seeedstudio.com/AZ3166-IOT-Developer-Kit.html) (MXCHIP DevKit)
* Wi-Fi 2.4 GHz * USB 2.0 A male to Micro USB male cable
iot-dps About Iot Dps https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-dps/about-iot-dps.md
DPS is available in many regions. The updated list of existing and newly announc
## Availability There is a 99.9% Service Level Agreement for DPS, and you can [read the SLA](https://azure.microsoft.com/support/legal/sla/iot-hub/). The full [Azure SLA](https://azure.microsoft.com/support/legal/sla/) explains the guaranteed availability of Azure as a whole.
-## Quotas
+## Quotas and Limits
Each Azure subscription has default quota limits in place that could impact the scope of your IoT solution. The current limit is 10 Device Provisioning Service instances per subscription.
+For more details on quota limits, see [Azure Subscription Service Limits](../azure-resource-manager/management/azure-subscription-service-limits.md).
+ [!INCLUDE [azure-iotdps-limits](../../includes/iot-dps-limits.md)]
-For more details on quota limits:
-* [Azure Subscription Service Limits](../azure-resource-manager/management/azure-subscription-service-limits.md)
## Related Azure components DPS automates device provisioning with Azure IoT Hub. Learn more about [IoT Hub](../iot-hub/index.yml).
iot-edge Gpu Acceleration https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-edge/gpu-acceleration.md
+
+ Title: GPU acceleration for Azure IoT Edge for Linux on Windows | Microsoft Docs
+description: Learn about how to configure your Azure IoT Edge for Linux on Windows virtual machines to use host device GPUs.
+++ Last updated : 06/22/2021+++
+monikerRange: "=iotedge-2018-06"
++
+# GPU acceleration for Azure IoT Edge for Linux on Windows (Preview)
+
+GPUs are a popular choice for artificial intelligence computations, because they offer parallel processing capabilities and can often execute vision-based inferencing faster than CPUs. To better support artificial intelligence and machine learning applications, Azure IoT Edge for Linux on Windows can expose a GPU to the virtual machine's Linux module.
+
+> [!NOTE]
+> The GPU acceleration features detailed below are in preview and are subject to change.
+
+Azure IoT Edge for Linux on Windows supports several GPU passthrough technologies, including:
+
+* **Direct Device Assignment (DDA)** - GPU cores are wholly dedicated to the Linux virtual machine.
+
+* **GPU-Paravirtualization (GPU-PV)** - The GPU is shared between the Linux VM and host.
+
+The Azure IoT Edge for Linux on Windows deployment will automatically select the appropriate passthrough method to match the supported capabilities of your device's GPU hardware.
+
+> [!IMPORTANT]
+> These features may include components developed and owned by NVIDIA Corporation or its licensors. The use of the components is governed by the NVIDIA End-User License Agreement located [on NVIDIA's website](https://www.nvidia.com/content/DriverDownload-March2009/licence.php?lang=us).
+>
+> By using GPU acceleration features, you are accepting and agreeing to the terms of the NVIDIA End-User License Agreement.
+
+## Prerequisites
+
+The GPU acceleration features of Azure IoT Edge for Linux on Windows currently support a select set of GPU hardware. Additionally, use of this feature may require the latest Windows Insider Dev Channel build, depending on your configuration.
+
+The supported GPUs and required Windows versions are listed below:
+
+* NVIDIA T4 (supports DDA)
+
+ * Windows Server, build 17763 or higher
+ * Windows Enterprise or Professional, build 21318 or higher (Windows Insider build)
+
+* NVIDIA GeForce/Quadro (supports GPU-PV)
+
+ * Windows Enterprise or Professional, build 20145 or higher (Windows Insider build)
+
+### Windows Insider builds
+
+For Windows Enterprise or Professional users, you will need to [register for the Windows Insider Program](https://insider.windows.com/getting-started#register).
+
+Once you register, follow the instructions on the **2. Flight** tab to get access to the appropriate Windows Insider build. When selecting the channel you wish to use, select the [dev channel](/windows-insider/flight-hub/#active-development-builds-of-windows-10). After installation, you can verify your build version number by running `winver` via command prompt.
+
+### T4 GPUs
+
+For **T4 GPUs**, Microsoft recommends a device mitigation driver from your GPU's vendor. For more information, see [Deploy graphics devices using direct device assignment](/windows-server/virtualization/hyper-v/deploy/deploying-graphics-devices-using-dda#optionalinstall-the-partitioning-driver).
+
+> [!WARNING]
+> Enabling hardware device passthrough may increase security risks.
+
+### GeForce/Quadro GPUs
+
+For **GeForce/Quadro GPUs**, download and install the [NVIDIA CUDA-enabled driver for Windows Subsystem for Linux (WSL)](https://developer.nvidia.com/cuda/wsl) to use with your existing CUDA ML workflows. Originally developed for WSL, the CUDA for WSL drivers are also used for Azure IoT Edge for Linux on Windows.
+
+## Using GPU acceleration for your Linux on Windows deployment
+
+Now you are ready to deploy and run GPU-accelerated Linux modules in your Windows environment through Azure IoT Edge for Linux on Windows. More details on the deployment process can be found in [Install Azure IoT Edge for Linux on Windows](how-to-install-iot-edge-on-windows.md).
+
+## Next steps
+
+* [Create your deployment of Azure IoT Edge for Linux on Windows](how-to-install-iot-edge-on-windows.md)
+
+* Learn more about GPU passthrough technologies by visiting the [DDA documentation](/windows-server/virtualization/hyper-v/plan/plan-for-gpu-acceleration-in-windows-server#discrete-device-assignment-dda).
iot-edge How To Auto Provision Symmetric Keys https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-edge/how-to-auto-provision-symmetric-keys.md
Jsm0lyGpjaVYVP2g3FnmnmG9dI/9qU24wNoykUmermc=
The IoT Edge runtime is deployed on all IoT Edge devices. Its components run in containers, and allow you to deploy additional containers to the device so that you can run code at the edge.
+<!-- 1.1 -->
+
+Follow the appropriate steps to install Azure IoT Edge based on your operating system:
+
+* [Install IoT Edge for Linux](how-to-install-iot-edge.md)
+* [Install IoT Edge for Linux on Windows devices](how-to-install-iot-edge-on-windows.md)
+ * This scenario is the recommended way to run IoT Edge on Windows devices.
+* [Install IoT Edge with Windows containers](how-to-install-iot-edge-windows-on-windows.md)
+
+Once IoT Edge is installed on your device, return to this article to provision the device.
+
+<!-- end 1.1 -->
+
+<!-- 1.2 -->
+ Follow the steps in [Install the Azure IoT Edge runtime](how-to-install-iot-edge.md), then return to this article to provision the device.
+<!-- end 1.2 -->
+ ## Configure the device with provisioning information Once the runtime is installed on your device, configure the device with the information it uses to connect to the Device Provisioning Service and IoT Hub.
Have the following information ready:
> [!TIP] > For group enrollments, you need each device's [derived key](#derive-a-device-key) rather than the DPS enrollment primary key.
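A device key for a group enrollment is derived by computing an HMAC-SHA256 of the device's registration ID, using the group enrollment primary key as the key, and Base64-encoding the result. The following bash sketch illustrates this; the key and registration ID values are placeholders to replace with your own.

```bash
# Placeholder values; substitute your group enrollment primary key and device registration ID.
KEY=<group-enrollment-primary-key>
REG_ID=<device-registration-id>

# Decode the Base64 group key to hex, then HMAC-SHA256 the registration ID and re-encode as Base64.
keybytes=$(echo "$KEY" | base64 --decode | xxd -p -u -c 1000)
echo -n "$REG_ID" | openssl sha256 -mac HMAC -macopt hexkey:"$keybytes" -binary | base64
```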
-### Linux device
+# [Linux](#tab/linux)
<!-- 1.1 --> :::moniker range="iotedge-2018-06"
Have the following information ready:
:::moniker-end <!-- end 1.2 -->
-### Windows device
+# [Linux on Windows](#tab/eflow)
+
+<!-- 1.1 -->
+
+You can use either PowerShell or Windows Admin Center to provision your IoT Edge device.
+
+### PowerShell
+
+For PowerShell, run the following command with the placeholder values updated with your own values:
+
+```powershell
+Provision-EflowVm -provisioningType DpsSymmetricKey -scopeId <ID_SCOPE_HERE> -registrationId <REGISTRATION_ID_HERE> -symmKey <PRIMARY_KEY_HERE>
+```
+
+### Windows Admin Center
+
+For Windows Admin Center, use the following steps:
+
+1. On the **Azure IoT Edge device provisioning** pane, select **Symmetric Key (DPS)** from the provisioning method dropdown.
+
+1. In the [Azure portal](https://ms.portal.azure.com/), navigate to your DPS instance.
+
+1. On the **Overview** tab, copy the **ID Scope** value. Paste it into the scope ID field in the Windows Admin Center.
+
+1. On the **Manage enrollments** tab in the Azure portal, select the enrollment you created. Copy the **Primary Key** value in the enrollment details. Paste it into the symmetric key field in the Windows Admin Center.
+
+1. Provide the registration ID of your device in the registration ID field in the Windows Admin Center.
+
+1. Choose **Provisioning with the selected method**.
+
+ ![Choose provisioning with the selected method after filling in the required fields for symmetric key provisioning](./media/how-to-install-iot-edge-on-windows/provisioning-with-selected-method-symmetric-key.png)
+
+1. Once the provisioning is complete, select **Finish**. You will be taken back to the main dashboard. Now, you should see a new device listed, whose type is `IoT Edge Devices`. You can select the IoT Edge device to connect to it. Once on its **Overview** page, you can view the **IoT Edge Module List** and **IoT Edge Status** of your device.
+
+<!-- end 1.1. -->
+
+<!-- 1.2 -->
+
+>[!NOTE]
+>Currently, IoT Edge version 1.2 is not supported on IoT Edge for Linux on Windows.
+
+<!-- end 1.2 -->
+
+# [Windows](#tab/windows)
+
+<!-- 1.1 -->
1. Open a PowerShell window in administrator mode. Be sure to use an AMD64 session of PowerShell when installing IoT Edge, not PowerShell (x86).
Have the following information ready:
Initialize-IoTEdge -DpsSymmetricKey -ScopeId {scope ID} -RegistrationId {registration ID} -SymmetricKey {symmetric key} ```
+<!-- end 1.1 -->
+
+<!-- 1.2 -->
+
+>[!NOTE]
+>Currently, IoT Edge version 1.2 is not supported on Windows.
+
+<!-- end 1.2 -->
+++ ## Verify successful installation
-If the runtime started successfully, you can go into your IoT Hub and start deploying IoT Edge modules to your device. Use the following commands on your device to verify that the runtime installed and started successfully.
+If the runtime started successfully, you can go into your IoT Hub and start deploying IoT Edge modules to your device.
+
+You can verify that the individual enrollment that you created in Device Provisioning Service was used. Navigate to your Device Provisioning Service instance in the Azure portal. Open the enrollment details for the individual enrollment that you created. Notice that the status of the enrollment is **assigned** and the device ID is listed.
+
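If you prefer to check from the command line, the following sketch uses the Azure CLI (it assumes the azure-iot extension is installed; all names are placeholders). The returned enrollment includes its assignment status and device ID once the device has connected.

```bash
# Shows the individual enrollment, including its registration state after the device connects.
az iot dps enrollment show \
  --dps-name <dps-instance-name> \
  --resource-group <resource-group-name> \
  --enrollment-id <registration-id>
```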
+Use the following commands on your device to verify that IoT Edge installed and started successfully.
-### Linux device
+# [Linux](#tab/linux)
<!-- 1.1 --> :::moniker range="iotedge-2018-06"
sudo iotedge list
:::moniker-end
-### Windows device
+# [Linux on Windows](#tab/eflow)
+
+<!-- 1.1 -->
+
+Connect to the IoT Edge for Linux on Windows virtual machine.
+
+```powershell
+Connect-EflowVM
+```
+
+Check the status of the IoT Edge service.
+
+```cmd/sh
+sudo systemctl status iotedge
+```
+
+Examine service logs.
+
+```cmd/sh
+sudo journalctl -u iotedge --no-pager --no-full
+```
+
+List running modules.
+
+```cmd/sh
+sudo iotedge list
+```
++
+<!-- 1.2 -->
+
+>[!NOTE]
+>Currently, IoT Edge version 1.2 is not supported on IoT Edge for Linux on Windows.
+
+<!-- end 1.2 -->
+
+# [Windows](#tab/windows)
+
+<!-- 1.1 -->
Check the status of the IoT Edge service.
List running modules.
iotedge list ```
-You can verify that the individual enrollment that you created in Device Provisioning Service was used. Navigate to your Device Provisioning Service instance in the Azure portal. Open the enrollment details for the individual enrollment that you created. Notice that the status of the enrollment is **assigned** and the device ID is listed.
+
+<!-- 1.2 -->
+
+>[!NOTE]
+>Currently, IoT Edge version 1.2 is not supported on Windows.
+
+<!-- end 1.2 -->
++ ## Next steps
iot-edge How To Auto Provision Tpm Linux On Windows https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-edge/how-to-auto-provision-tpm-linux-on-windows.md
+
+ Title: Auto-provision Windows devices with DPS and TPM - Azure IoT Edge | Microsoft Docs
+description: Use automatic device provisioning for IoT Edge for Linux on Windows with Device Provisioning Service and TPM attestation
++++ Last updated : 06/18/2021+++
+monikerRange: "=iotedge-2018-06"
++
+# Create and provision an IoT Edge for Linux on Windows device with TPM attestation
++
+Azure IoT Edge devices can be provisioned using the [Device Provisioning Service](../iot-dps/index.yml) just like devices that are not edge-enabled. If you're unfamiliar with the process of auto-provisioning, review the [provisioning](../iot-dps/about-iot-dps.md#provisioning-process) overview before continuing.
+
+DPS supports Trusted Platform Module (TPM) attestation for IoT Edge devices only for individual enrollment, not group enrollment.
+
+This article shows you how to use auto-provisioning on a device running IoT Edge for Linux on Windows with the following steps:
+
+* Retrieve the TPM information from your device.
+* Create an individual enrollment for the device.
+* Install IoT Edge for Linux on Windows and connect the device to IoT Hub.
+
+>[!TIP]
+>This article uses programs that simulate a TPM on the device to test this scenario, but much of it applies when using physical TPM hardware as well.
+
+## Prerequisites
+
+* A Windows device. For supported Windows versions, see [Operating systems](support.md#operating-systems).
+* An active IoT hub.
+* An instance of the IoT Hub Device Provisioning Service in Azure, linked to your IoT hub.
+ * If you don't have a Device Provisioning Service instance, follow the instructions in [Set up the IoT Hub DPS](../iot-dps/quick-setup-auto-provision.md).
+ * After you have the Device Provisioning Service running, copy the value of **ID Scope** from the overview page. You use this value when you configure the IoT Edge runtime.
+
+> [!NOTE]
+> TPM 2.0 is required when using TPM attestation with DPS and can only be used to create individual, not group, enrollments.
+
+## Simulate a TPM for your device
+
+To provision your device, you need to gather information from your TPM chip and provide it to your instance of the Device Provisioning Service (DPS) so that the service can recognize your device when it tries to connect.
+
+First, you need to determine the **Endorsement key**, which is unique to each TPM chip and is obtained from the TPM chip manufacturer associated with it. Then, you need to provide a **Registration ID** for your device. You can derive a unique registration ID for your TPM device by, for example, creating an SHA-256 hash of the endorsement key.
+
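The samples listed later in this section report both values for you. Purely as an illustration, hashing a Base64-encoded endorsement key into a registration ID candidate can be done from a bash shell like the following sketch; the key value is a placeholder.

```bash
# Minimal sketch; use the endorsement key reported by your TPM or the simulator sample.
EK_BASE64='<endorsement-key-base64>'
echo -n "$EK_BASE64" | base64 --decode | sha256sum | cut -d ' ' -f 1
```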
+DPS provides samples that simulate a TPM and return the endorsement key and registration ID for you.
+
+1. Choose one of the samples from the following list, based on your preferred language.
+1. Keep the window hosting the simulated TPM running until you're completely finished testing this scenario.
+1. When you create the DPS enrollment for your device, make sure you select **True** to declare that this enrollment is for an **IoT Edge device**.
+1. Stop following the DPS sample steps once you save your individual enrollment, then return to this article to set up IoT Edge for Linux on Windows.
+
+Simulated TPM samples:
+
+* [C](../iot-dps/quick-create-simulated-device.md)
+* [Java](../iot-dps/quick-create-simulated-device-tpm-java.md)
+* [C#](../iot-dps/quick-create-simulated-device-tpm-csharp.md)
+* [Node.js](../iot-dps/quick-create-simulated-device-tpm-node.md)
+* [Python](../iot-dps/quick-create-simulated-device-tpm-python.md)
+
+## Install IoT Edge for Linux on Windows
+
+The installation steps in this section are abridged to highlight the steps specific to the TPM provisioning scenario. For more detailed instructions, including prerequisites and remote installation steps, see [Install and provision Azure IoT Edge for Linux on a Windows device](how-to-install-iot-edge-on-windows.md).
+
+# [PowerShell](#tab/powershell)
+
+1. Open an elevated PowerShell session on the Windows device.
+
+1. Download IoT Edge for Linux on Windows.
+
+ ```powershell
+ $msiPath = $([io.Path]::Combine($env:TEMP, 'AzureIoTEdge.msi'))
+ $ProgressPreference = 'SilentlyContinue'
+ Invoke-WebRequest "https://aka.ms/AzEflowMSI" -OutFile $msiPath
+ ```
+
+1. Install IoT Edge for Linux on Windows on your device.
+
+ ```powershell
+ Start-Process -Wait msiexec -ArgumentList "/i","$([io.Path]::Combine($env:TEMP, 'AzureIoTEdge.msi'))","/qn"
+ ```
+
+1. For the deployment to run successfully, you need to set the execution policy on the device to `AllSigned` if it is not already.
+
+ 1. Check the current execution policy.
+
+ ```powershell
+ Get-ExecutionPolicy -List
+ ```
+
+ 1. If the execution policy of `local machine` is not `AllSigned`, update the execution policy.
+
+ ```powershell
+ Set-ExecutionPolicy -ExecutionPolicy AllSigned -Force
+ ```
+
+1. Deploy IoT Edge for Linux on Windows.
+
+ ```powershell
+ Deploy-Eflow
+ ```
+
+1. Enter `Y` to accept the license terms.
+
+1. Enter `O` or `R` to toggle **Optional diagnostic data** on or off, depending on your preference.
+
+1. The output will report **Deployment successful** once IoT Edge for Linux on Windows has been successfully deployed to your device.
+
+1. Provision your device using the **Scope ID** that you collected from your instance of Device Provisioning Service.
+
+ ```powershell
+ Provision-EflowVM -provisioningType "DpsTpm" -scopeId "<scope id>"
+ ```
+
+# [Windows Admin Center](#tab/windowsadmincenter)
+
+>[!NOTE]
+>The Azure IoT Edge extension for Windows Admin Center is currently in [public preview](https://azure.microsoft.com/support/legal/preview-supplemental-terms/). Installation and management processes may be different than for generally available features.
+
+1. Have Windows Admin Center configured with the **Azure IoT Edge** extension.
+
+1. On the Windows Admin Center connections page, select **Add**.
+
+1. On the **Add or create resources** pane, locate the **Azure IoT Edge** tile. Select **Create new** to install a new instance of Azure IoT Edge for Linux on Windows on a device.
+
+1. Follow the steps in the deployment wizard to install and configure IoT Edge for Linux on Windows.
+
+ 1. On the **Getting Started** steps, review the prerequisites, accept the license terms, and choose whether or not to send diagnostic data.
+
+ 1. On the **Deploy** steps, choose your device and its configuration settings. Then observe the progress as IoT Edge is deployed to your device.
+
+ 1. Select **Next** to continue to the **Connect** step, where you provide the provisioning information for your device.
+++
+## Configure the device with provisioning information
+
+# [PowerShell](#tab/powershell)
+
+1. Open an elevated PowerShell session on the Windows device.
+
+1. Provision your device using the **Scope ID** that you collected from your instance of Device Provisioning Service.
+
+ ```powershell
+ Provision-EflowVM -provisioningType "DpsTpm" -scopeId "<scope id>"
+ ```
+
+# [Windows Admin Center](#tab/windowsadmincenter)
+
+1. On the **Connect** step, provision your device.
+
+ 1. Select the **DpsTpm** provisioning method.
+ 1. Provide the **Scope ID** that you retrieve from your instance of the Device Provisioning Service.
+
+ ![Provision your device with DPS and TPM attestation.](./media/how-to-auto-provision-tpm-linux-on-windows/tpm-provision.png)
+
+1. Select **Provisioning with the selected method**.
+
+1. Once IoT Edge has successfully been installed and provisioned on your device, select **Finish** to exit the deployment wizard.
+++
+## Verify successful configuration
+
+If the runtime started successfully, you can go into your IoT Hub and start deploying IoT Edge modules to your device.
+
+You can verify that the individual enrollment that you created in Device Provisioning Service was used. Navigate to your Device Provisioning Service instance in the Azure portal. Open the enrollment details for the individual enrollment that you created. Notice that the status of the enrollment is **assigned** and the device ID is listed.
+
+Use the following commands on your device to verify that IoT Edge installed and started successfully.
+
+Connect to the IoT Edge for Linux on Windows virtual machine.
+
+```powershell
+Connect-EflowVM
+```
+
+Check the status of the IoT Edge service.
+
+```cmd/sh
+sudo systemctl status iotedge
+```
+
+Examine service logs.
+
+```cmd/sh
+sudo journalctl -u iotedge --no-pager --no-full
+```
+
+List running modules.
+
+```cmd/sh
+sudo iotedge list
+```
+
+## Next steps
+
+The Device Provisioning Service enrollment process lets you set the device ID and device twin tags at the same time as you provision the new device. You can use those values to target individual devices or groups of devices using automatic device management. Learn how to [Deploy and monitor IoT Edge modules at scale using the Azure portal](how-to-deploy-at-scale.md) or [using Azure CLI](how-to-deploy-cli-at-scale.md)
iot-edge How To Auto Provision X509 Certs https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-edge/how-to-auto-provision-x509-certs.md
description: Use X.509 certificates to test automatic device provisioning for Az
- Previously updated : 03/01/2021 Last updated : 06/18/2021
Now that an enrollment exists for this device, the IoT Edge runtime can automati
The IoT Edge runtime is deployed on all IoT Edge devices. Its components run in containers, and allow you to deploy additional containers to the device so that you can run code at the edge.
+<!-- 1.1 -->
+
+Follow the appropriate steps to install Azure IoT Edge based on your operating system:
+
+* [Install IoT Edge for Linux](how-to-install-iot-edge.md)
+* [Install IoT Edge for Linux on Windows devices](how-to-install-iot-edge-on-windows.md)
+ * This scenario is the recommended way to run IoT Edge on Windows devices.
+* [Install IoT Edge with Windows containers](how-to-install-iot-edge-windows-on-windows.md)
+
+Once IoT Edge is installed on your device, return to this article to provision the device.
+
+<!-- end 1.1 -->
+
+<!-- 1.2 -->
+ Follow the steps in [Install the Azure IoT Edge runtime](how-to-install-iot-edge.md), then return to this article to provision the device.
+<!-- end 1.2 -->
+ X.509 provisioning with DPS is only supported in IoT Edge version 1.0.9 or newer. ## Configure the device with provisioning information
Have the following information ready:
* The device identity certificate chain file on the device. * The device identity key file on the device.
-### Linux device
+# [Linux](#tab/linux)
<!-- 1.1 --> :::moniker range="iotedge-2018-06"
Have the following information ready:
:::moniker-end <!-- end 1.2 -->
-### Windows device
+# [Linux on Windows](#tab/eflow)
+
+<!-- 1.1 -->
+
+You can use either PowerShell or Windows Admin Center to provision your IoT Edge device.
+
+### PowerShell
+
+For PowerShell, run the following command with the placeholder values updated with your own values:
+
+```powershell
+Provision-EflowVm -provisioningType DPSx509 -scopeId <ID_SCOPE_HERE> -registrationId <REGISTRATION_ID_HERE> -identityCertLocWin <ABSOLUTE_CERT_SOURCE_PATH_ON_WINDOWS_MACHINE> -identityPkLocWin <ABSOLUTE_PRIVATE_KEY_SOURCE_PATH_ON_WINDOWS_MACHINE> -identityCertLocVm <ABSOLUTE_CERT_DEST_PATH_ON_LINUX_MACHINE> -identityPkLocVm <ABSOLUTE_PRIVATE_KEY_DEST_PATH_ON_LINUX_MACHINE>
+```
+
+### Windows Admin Center
+
+For Windows Admin Center, use the following steps:
+
+1. On the **Azure IoT Edge device provisioning** pane, select **X.509 Certificate (DPS)** from the provisioning method dropdown.
+
+1. In the [Azure portal](https://ms.portal.azure.com/), navigate to your DPS instance.
+
+1. On the **Overview** tab, copy the **ID Scope** value. Paste it into the scope ID field in the Windows Admin Center.
+
+1. Provide the registration ID of your device in the registration ID field in the Windows Admin Center.
+
+1. Upload your certificate and private key files.
+
+1. Choose **Provisioning with the selected method**.
+
+ ![Choose provisioning with the selected method after filling in the required fields for X.509 certificate provisioning](./media/how-to-install-iot-edge-on-windows/provisioning-with-selected-method-x509-certs.png)
+
+1. Once the provisioning is complete, select **Finish**. You will be taken back to the main dashboard. Now, you should see a new device listed, whose type is `IoT Edge Devices`. You can select the IoT Edge device to connect to it. Once on its **Overview** page, you can view the **IoT Edge Module List** and **IoT Edge Status** of your device.
+
+<!-- end 1.1. -->
+
+<!-- 1.2 -->
+
+>[!NOTE]
+>Currently, IoT Edge version 1.2 is not supported on IoT Edge for Linux on Windows.
+
+<!-- end 1.2 -->
+
+# [Windows](#tab/windows)
+
+<!-- 1.1 -->
1. Open a PowerShell window in administrator mode. Be sure to use an AMD64 session of PowerShell when installing IoT Edge, not PowerShell (x86).
Have the following information ready:
>[!TIP] >The config file stores your certificate and key information as file URIs. However, the Initialize-IoTEdge command handles this formatting step for you, so you can provide the absolute path to the certificate and key files on your device.
+<!-- end 1.1. -->
+
+<!-- 1.2 -->
+
+>[!NOTE]
+>Currently, IoT Edge version 1.2 is not supported on Windows.
+
+<!-- end 1.2 -->
+++ ## Verify successful installation If the runtime started successfully, you can go into your IoT Hub and start deploying IoT Edge modules to your device. You can verify that the individual enrollment that you created in Device Provisioning Service was used. Navigate to your Device Provisioning Service instance in the Azure portal. Open the enrollment details for the individual enrollment that you created. Notice that the status of the enrollment is **assigned** and the device ID is listed.
-Use the following commands on your device to verify that the runtime installed and started successfully.
+Use the following commands on your device to verify that IoT Edge installed and started successfully.
-### Linux device
+# [Linux](#tab/linux)
<!-- 1.1 --> :::moniker range="iotedge-2018-06"
List running modules.
```cmd/sh iotedge list ```+ :::moniker-end <!-- 1.2 -->
List running modules.
```cmd/sh sudo iotedge list ```+ :::moniker-end
-### Windows device
+# [Linux on Windows](#tab/eflow)
+
+<!-- 1.1 -->
+
+Connect to the IoT Edge for Linux on Windows virtual machine.
+
+```powershell
+Connect-EflowVM
+```
+
+Check the status of the IoT Edge service.
+
+```cmd/sh
+sudo systemctl status iotedge
+```
+
+Examine service logs.
+
+```cmd/sh
+sudo journalctl -u iotedge --no-pager --no-full
+```
+
+List running modules.
+
+```cmd/sh
+sudo iotedge list
+```
++
+<!-- 1.2 -->
+
+>[!NOTE]
+>Currently, IoT Edge version 1.2 is not supported on IoT Edge for Linux on Windows.
+
+<!-- end 1.2 -->
+
+# [Windows](#tab/windows)
+
+<!-- 1.1 -->
Check the status of the IoT Edge service.
List running modules.
iotedge list ``` +
+<!-- 1.2 -->
+
+>[!NOTE]
+>Currently, IoT Edge version 1.2 is not supported on Windows.
+
+<!-- end 1.2 -->
+++ ## Next steps The Device Provisioning Service enrollment process lets you set the device ID and device twin tags at the same time as you provision the new device. You can use those values to target individual devices or groups of devices using automatic device management. Learn how to [Deploy and monitor IoT Edge modules at scale using the Azure portal](how-to-deploy-at-scale.md) or [using Azure CLI](how-to-deploy-cli-at-scale.md).
iot-edge How To Configure Proxy Support https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-edge/how-to-configure-proxy-support.md
systemctl show --property=Environment aziot-identityd
Log in to your IoT Edge for Linux on Windows virtual machine:
-```azurepowershell-interactive
-Ssh-EflowVm
+```powershell
+Connect-EflowVm
``` Follow the same steps as the Linux section above to configure the IoT Edge daemon.
If you included the **UpstreamProtocol** environment variable in the confige.yam
If the proxy you're attempting to use performs traffic inspection on TLS-secured connections, it's important to note that authentication with X.509 certificates doesn't work. IoT Edge establishes a TLS channel that's encrypted end to end with the provided certificate and key. If that channel is broken for traffic inspection, the proxy can't reestablish the channel with the proper credentials, and IoT Hub and the IoT Hub device provisioning service return an `Unauthorized` error.
-To use a proxy that performs traffic inspection, you must use either shared access signature authentication or have IoT Hub and the IoT Hub device provisioning service added to an allow list to avoid inspection.
+To use a proxy that performs traffic inspection, you must use either shared access signature authentication or have IoT Hub and the IoT Hub device provisioning service added to an allowlist to avoid inspection.
## Next steps
iot-edge How To Install Iot Edge On Windows https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-edge/how-to-install-iot-edge-on-windows.md
Title: Install Azure IoT Edge for Linux on Windows | Microsoft Docs
description: Azure IoT Edge installation instructions on Windows devices -+ Previously updated : 01/20/2021 Last updated : 06/10/2021 monikerRange: "=iotedge-2018-06"
-# Install and provision Azure IoT Edge for Linux on a Windows device (Preview)
+# Install and provision Azure IoT Edge for Linux on a Windows device
[!INCLUDE [iot-edge-version-201806](../../includes/iot-edge-version-201806.md)] The Azure IoT Edge runtime is what turns a device into an IoT Edge device. The runtime can be deployed on devices from PC class to industrial servers. Once a device is configured with the IoT Edge runtime, you can start deploying business logic to it from the cloud. To learn more, see [Understand the Azure IoT Edge runtime and its architecture](iot-edge-runtime.md).
-Azure IoT Edge for Linux on Windows allows you to use Azure IoT Edge on Windows devices by using Linux virtual machines. The Linux version of Azure IoT Edge and any Linux modules deployed with it run on the virtual machine. From there, Windows applications and code and the IoT Edge runtime and modules can freely interact with each other.
+Azure IoT Edge for Linux on Windows allows you to install IoT Edge on Linux virtual machines that run on Windows devices. The Linux version of Azure IoT Edge and any Linux modules deployed with it run on the virtual machine. From there, Windows applications and code and the IoT Edge runtime and modules can freely interact with each other.
This article lists the steps to set up IoT Edge on a Windows device. These steps deploy a Linux virtual machine that contains the IoT Edge runtime to run on your Windows device, then provision the device with its IoT Hub device identity. >[!NOTE]
->IoT Edge for Linux on Windows is in [public preview](https://azure.microsoft.com/support/legal/preview-supplemental-terms/).
->
->While IoT Edge for Linux on Windows is the recommended experience for using Azure IoT Edge in a Windows environment, Windows containers are still available. If you prefer to use Windows containers, see the how-to guide on [installing and managing Azure IoT Edge for Windows](how-to-install-iot-edge-windows-on-windows.md).
+>IoT Edge for Linux on Windows is the recommended experience for using Azure IoT Edge in a Windows environment. However, Windows containers are still available. If you prefer to use Windows containers, see [Install and manage Azure IoT Edge with Windows containers](how-to-install-iot-edge-windows-on-windows.md).
## Prerequisites
This article lists the steps to set up IoT Edge on a Windows device. These steps
* Professional, Enterprise, or Server editions * Minimum Free Memory: 1 GB * Minimum Free Disk Space: 10 GB
- * If you're creating a new deployment using Windows 10, make sure you enable Hyper-V. For more information, see how to [Install Hyper-V on Windows 10](/virtualization/hyper-v-on-windows/quick-start/enable-hyper-v).
- * If you're creating a new deployment using Windows Server, make sure you install Hyper-V role and have a network switch. For more information, see [Nested virtualization for Azure IoT Edge for Linux on Windows](nested-virtualization.md).
- * If you're creating a new deployment using a VM, make sure you configure nested virtualization correctly. For more information, see the [nested virtualization](nested-virtualization.md) guide.
+ * Virtualization support
+ * On Windows 10, enable Hyper-V. For more information, see [Install Hyper-V on Windows 10](/virtualization/hyper-v-on-windows/quick-start/enable-hyper-v).
+ * On Windows Server, install the Hyper-V role and create a default network switch. For more information, see [Nested virtualization for Azure IoT Edge for Linux on Windows](nested-virtualization.md).
+ * On a virtual machine, configure nested virtualization. For more information, see [nested virtualization](nested-virtualization.md).
-* Access to Windows Admin Center with the Azure IoT Edge extension for Windows Admin Center installed:
+* If you want to install and manage an IoT Edge device using Windows Admin Center, make sure you have access to Windows Admin Center and have the Azure IoT Edge extension installed:
- 1. Download the [Windows Admin Center installer](https://aka.ms/wacdownload).
-
- 1. Run the downloaded installer and follow the install wizard prompts to install Windows Admin Center.
+ 1. Download and run the [Windows Admin Center installer](https://aka.ms/wacdownload). Follow the install wizard prompts to install Windows Admin Center.
1. Once installed, use a supported browser to open Windows Admin Center. Supported browsers include Microsoft Edge (Windows 10, version 1709 or later), Google Chrome, and Microsoft Edge Insider. 1. On the first use of Windows Admin Center, you will be prompted to select a certificate to use. Select **Windows Admin Center Client** as your certificate.
- 1. It is time to install the Azure IoT Edge extension. Select the gear icon in the top right of the Windows Admin Center dashboard.
+ 1. Install the Azure IoT Edge extension. Select the gear icon in the top right of the Windows Admin Center dashboard.
![Select the gear icon in the top right of the dashboard to access the settings.](./media/how-to-install-iot-edge-on-windows/select-gear-icon.png)
This article lists the steps to set up IoT Edge on a Windows device. These steps
1. After the installation completes, you should see Azure IoT Edge in the list of installed extensions on the **Installed extensions** tab.
+* If you want to use **GPU-accelerated Linux modules** in your Azure IoT Edge for Linux on Windows deployment, there are several configuration options to consider. You will need to install the correct drivers depending on your GPU architecture, and you may need access to a Windows Insider Program build. To determine your configuration needs and satisfy these prerequisites, see [GPU acceleration for Azure IoT Edge for Linux on Windows](gpu-acceleration.md).
+ ## Choose your provisioning method Azure IoT Edge for Linux on Windows supports the following provisioning methods:
-* Manual provisioning using your IoT Edge device's connection string. To use this method, register your device and retrieve a connection string using the steps in [Register an IoT Edge device in IoT Hub](how-to-register-device.md).
- * Choose the symmetric key authentication option, as X.509 self-signed certificates are not currently supported by IoT Edge for Linux on Windows.
-* Automatic provisioning using Device Provisioning Service (DPS) and symmetric keys. Learn more about [creating and provisioning an IoT Edge device with DPS and symmetric keys](how-to-auto-provision-symmetric-keys.md).
-* Automatic provisioning using DPS and X.509 certificates. Learn more about [creating and provisioning an IoT Edge device with DPS and X.509 certificates](how-to-auto-provision-x509-certs.md).
-
-Manual provisioning is easier to get started with a few devices. The Device Provisioning Service is helpful for provisioning many devices.
-
-If you plan on using one of the DPS methods to provision your device or devices, follow the steps in the appropriate article linked above to create an instance of DPS, link your DPS instance to your IoT Hub, and create a DPS enrollment. You can create an *individual enrollment* for a single device or a *group enrollment* for a group of devices. For more information about the enrollment types, visit the [Azure IoT Hub Device Provisioning Service concepts](../iot-dps/concepts-service.md#enrollment).
-
-## Create a new deployment
-
-Create your deployment of Azure IoT Edge for Linux on Windows on your target device.
-
-# [Windows Admin Center](#tab/windowsadmincenter)
-
-On the Windows Admin Center start page, under the list of connections, you will see a local host connection representing the PC where you running Windows Admin Center. Any additional servers, PCs, or clusters that you manage will also show up here.
+* **Manual provisioning** for a single device.
-You can use Windows Admin Center to install and manage Azure IoT Edge for Linux on Windows on either your local device or remote managed devices. In this guide, the local host connection will serve as the target device for the deployment of Azure IoT Edge for Linux on Windows.
-
-If you want to deploy to a remote target device instead of your local device and you do not see your desired target device in the list, follow the [instructions to add your device.](/windows-server/manage/windows-admin-center/use/get-started#connecting-to-managed-nodes-and-clusters).
+ * To prepare for manual provisioning, follow the steps in [Register an IoT Edge device in IoT Hub](how-to-register-device.md). Choose either symmetric key authentication or X.509 certificate authentication, then return to this article to install and provision IoT Edge.
- ![Initial Windows Admin Center dashboard with target device listed](./media/how-to-install-iot-edge-on-windows/windows-admin-center-initial-dashboard.png)
+* **Automatic provisioning** using the IoT Hub Device Provisioning Service (DPS) for one or many devices.
-1. Select **Add**.
+ * Choose the authentication method you want to use, and then follow the steps in the appropriate article to set up an instance of DPS and create an enrollment to provision your device or devices. For more information about the enrollment types, visit the [Azure IoT Hub Device Provisioning Service concepts](../iot-dps/concepts-service.md#enrollment).
-1. On the **Add or create resources** pane, locate the **Azure IoT Edge** tile. Select **Create new** to install a new instance of Azure IoT Edge for Linux on Windows on a device.
+ * [Provision an IoT Edge device with DPS and symmetric keys.](how-to-auto-provision-symmetric-keys.md)
+ * [Provision an IoT Edge device with DPS and X.509 certificates.](how-to-auto-provision-x509-certs.md)
+ * [Provision an IoT Edge device with DPS and TPM attestation.](how-to-auto-provision-tpm-linux-on-windows.md)
- If you already have IoT Edge for Linux on Windows running on a device, you could select **Add** to connect to that existing IoT Edge device and manage it with Windows Admin Center.
-
- ![Select Create New on Azure IoT Edge tile in Windows Admin Center](./media/how-to-install-iot-edge-on-windows/resource-creation-tiles.png)
-
-1. The **Create an Azure IoT Edge for Linux on Windows deployment** pane will open. On the **1. Getting Started** tab, verify that your target device meets the minimum requirements, and select **Next**.
-
-1. Review the license terms, check **I Accept**, and select **Next**.
-
-1. You can toggle **Optional diagnostic data** on or off, depending on your preference.
-
-1. Select **Next: Deploy**.
-
- ![Select the Next: Deploy button after toggling optional diagnostic data to your preference](./media/how-to-install-iot-edge-on-windows/select-next-deploy.png)
-
-1. On the **2. Deploy** tab, under **Select a target device**, click on your listed device to validate it meets the minimum requirements. Once its status is confirmed as supported, select **Next**.
-
- ![Select your device to verify it is supported](./media/how-to-install-iot-edge-on-windows/evaluate-supported-device.png)
-
-1. On the **2.2 Settings** tab, review the configuration settings of your deployment. Once you are satisfied with the settings, select **Next**.
-
- ![Review the configuration settings of your deployment](./media/how-to-install-iot-edge-on-windows/default-deployment-configuration-settings.png)
-
- >[!NOTE]
- >If you are using a Windows virtual machine, it is recommended to use a default switch rather than an external switch to ensure the Linux virtual machine created in the deployment can obtain an IP address.
- >
- >Using a default switch assigns the Linux virtual machine an internal IP address. This internal IP address cannot be reached from outside the Windows virtual machine, but it can be connected to locally while logged onto the Windows virtual machine.
- >
- >If you are using Windows Server, please note that Azure IoT Edge for Linux on Windows does not automatically support the default switch. For a local Windows Server virtual machine, ensure the Linux virtual machine can obtain an IP address through the external switch. For a Windows Server virtual machine in Azure, set up an internal switch before deploying IoT Edge for Linux on Windows.
-
-1. On the **2.3 Deployment** tab, you can watch the progress of the deployment. The full process includes downloading the Azure IoT Edge for Linux on Windows package, installing the package, configuring the host device, and setting up the Linux virtual machine. This process may take several minutes to complete. A successful deployment is pictured below.
-
- ![A successful deployment will show each step with a green check mark and a 'Complete' label](./media/how-to-install-iot-edge-on-windows/successful-deployment.png)
+## Create a new deployment
-Once your deployment is complete, you are ready to provision your device. Select **Next: Connect** to proceed to the **3. Connect** tab, which handles Azure IoT Edge device provisioning.
+Deploy Azure IoT Edge for Linux on Windows on your target device.
# [PowerShell](#tab/powershell)

Install IoT Edge for Linux on Windows onto your target device if you have not already.

> [!NOTE]
-> The following PowerShell process outlines how to create a local host deployment of Azure IoT Edge for Linux on Windows. To create a deployment to a remote target device using PowerShell, you can use [Remote PowerShell](/powershell/module/microsoft.powershell.core/about/about_remote) to establish a connection to a remote device and run these commands remotely on that device.
+> The following PowerShell process outlines how to deploy IoT Edge for Linux on Windows onto the local device. To deploy to a remote target device using PowerShell, you can use [Remote PowerShell](/powershell/module/microsoft.powershell.core/about/about_remote) to establish a connection to a remote device and run these commands remotely on that device.
1. In an elevated PowerShell session, run each of the following commands to download IoT Edge for Linux on Windows.
- ```azurepowershell-interactive
+ ```powershell
    $msiPath = $([io.Path]::Combine($env:TEMP, 'AzureIoTEdge.msi'))
    $ProgressPreference = 'SilentlyContinue'
    Invoke-WebRequest "https://aka.ms/AzEflowMSI" -OutFile $msiPath
Install IoT Edge for Linux on Windows onto your target device if you have not al
1. Install IoT Edge for Linux on Windows on your device.
- ```azurepowershell-interactive
+ ```powershell
    Start-Process -Wait msiexec -ArgumentList "/i","$([io.Path]::Combine($env:TEMP, 'AzureIoTEdge.msi'))","/qn"
    ```
- > [!NOTE]
- > You can specify custom IoT Edge for Linux on Windows installation and VHDX directories by adding the INSTALLDIR="<FULLY_QUALIFIED_PATH>" and VHDXDIR="<FULLY_QUALIFIED_PATH>" parameters to the install command above.
+ You can specify custom IoT Edge for Linux on Windows installation and VHDX directories by adding `INSTALLDIR="<FULLY_QUALIFIED_PATH>"` and `VHDXDIR="<FULLY_QUALIFIED_PATH>"` parameters to the install command.
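    For example, here is a sketch of the same install command with custom directories; the `D:\` paths below are illustrative placeholders, not defaults, so substitute your own fully qualified paths (and quote them if they contain spaces):

    ```powershell
    # Install the EFLOW MSI with custom installation and VHDX directories (illustrative paths).
    Start-Process -Wait msiexec -ArgumentList "/i","$([io.Path]::Combine($env:TEMP, 'AzureIoTEdge.msi'))","/qn","INSTALLDIR=D:\AzureIoTEdge","VHDXDIR=D:\EflowVhdx"
    ```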
-1. For the deployment to run successfully, you need to set the execution policy on the target device to `AllSigned` if it is not already. You can check the current execution policy in an elevated PowerShell prompt using:
+1. Set the execution policy on the target device to `AllSigned` if it is not already. You can check the current execution policy in an elevated PowerShell prompt using:
- ```azurepowershell-interactive
+ ```powershell
    Get-ExecutionPolicy -List
    ```

    If the execution policy of `local machine` is not `AllSigned`, you can set the execution policy using:
- ```azurepowershell-interactive
+ ```powershell
    Set-ExecutionPolicy -ExecutionPolicy AllSigned -Force
    ```

1. Create the IoT Edge for Linux on Windows deployment.
- ```azurepowershell-interactive
+ ```powershell
    Deploy-Eflow
    ```
- > [!NOTE]
- > You can run this command without parameters or optionally customize deployment with parameters. You can refer to [the IoT Edge for Linux on Windows PowerShell script reference](reference-iot-edge-for-linux-on-windows-functions.md#deploy-eflow) to see parameter meanings and default values.
+ The `Deploy-Eflow` command takes optional parameters that help you customize your deployment.
-1. Enter 'Y' to accept the license terms.
-
-1. Enter 'O' or 'R' to toggle **Optional diagnostic data** on or off, depending on your preference. A successful deployment is pictured below.
+ You can assign a GPU to your deployment to enable GPU-accelerated Linux modules. To gain access to these features, you will need to install the prerequisites detailed in [GPU acceleration for Azure IoT Edge for Linux on Windows](gpu-acceleration.md).
- ![A successful deployment will say 'Deployment successful' at the end of the messages](./media/how-to-install-iot-edge-on-windows/successful-powershell-deployment.png)
+   To use GPU passthrough, you will need to add the **gpuName**, **gpuPassthroughType**, and **gpuCount** parameters to your `Deploy-Eflow` command. For information about all the optional parameters available, see [PowerShell functions for IoT Edge for Linux on Windows](reference-iot-edge-for-linux-on-windows-functions.md#deploy-eflow).
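    For example, here is a hedged sketch of a deployment with one GPU assigned through direct device assignment; the GPU name and the passthrough type value are illustrative assumptions, so check the PowerShell reference linked above for the exact values that apply to your adapter:

    ```powershell
    # Illustrative only: deploy EFLOW and assign one GPU to the Linux virtual machine.
    # Replace the GPU name and passthrough type with the values reported for your hardware.
    Deploy-Eflow -gpuName "NVIDIA Tesla T4" -gpuPassthroughType "DirectDeviceAssignment" -gpuCount 1
    ```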
-Once your deployment is complete, you are ready to provision your device.
+ >[!WARNING]
+ >Enabling hardware device passthrough may increase security risks. Microsoft recommends a device mitigation driver from your GPU's vendor, when applicable. For more information, see [Deploy graphics devices using discrete device assignment](/windows-server/virtualization/hyper-v/deploy/deploying-graphics-devices-using-dda).
-
-To provision your device, you can follow the links below to jump to the section for your selected provisioning method:
-* [Option 1: Manual provisioning using your IoT Edge device's connection string](#option-1-provisioning-manually-using-the-connection-string)
-* [Option 2: Automatic provisioning using Device Provisioning Service (DPS) and symmetric keys](#option-2-provisioning-via-dps-using-symmetric-keys)
-* [Option 3: Automatic provisioning using DPS and X.509 certificates](#option-3-provisioning-via-dps-using-x509-certificates)
+1. Enter 'Y' to accept the license terms.
-## Provision your device
+1. Enter 'O' or 'R' to toggle **Optional diagnostic data** on or off, depending on your preference.
-Choose a method for provisioning your device and follow the instructions in the appropriate section. You can use the Windows Admin Center or an elevated PowerShell session to provision your devices.
+1. Once the deployment is complete, the PowerShell window reports **Deployment successful**.
-### Option 1: Provisioning manually using the connection string
+ ![A successful deployment will say 'Deployment successful' at the end of the messages](./media/how-to-install-iot-edge-on-windows/successful-powershell-deployment-2.png)
-This section covers provisioning your device manually using your Azure IoT Edge device's connection string.
+Once your deployment is complete, you are ready to provision your device.
# [Windows Admin Center](#tab/windowsadmincenter)
-1. On the **Azure IoT Edge device provisioning** pane, select **Connection String (Manual)** from the provisioning method dropdown.
+>[!NOTE]
+>The Azure IoT Edge extension for Windows Admin Center is currently in [public preview](https://azure.microsoft.com/support/legal/preview-supplemental-terms/). Installation and management processes may be different than for generally available features.
-1. In the [Azure portal](https://ms.portal.azure.com/), navigate to the **IoT Edge** tab of your IoT Hub.
+On the Windows Admin Center start page, under the list of connections, you will see a local host connection representing the PC where you are running Windows Admin Center. Any additional servers, PCs, or clusters that you manage will also show up here.
-1. Click on the device ID of your device. Copy the **Primary Connection String** field.
+You can use Windows Admin Center to install and manage Azure IoT Edge for Linux on Windows on either your local device or remote managed devices. In this guide, the local host connection will serve as the target device for the deployment of Azure IoT Edge for Linux on Windows.
-1. Paste it into the device connection string field in the Windows Admin Center. Then, choose **Provisioning with the selected method**.
+If you want to deploy to a remote target device instead of your local device and you do not see your desired target device in the list, follow the [instructions to add your device](/windows-server/manage/windows-admin-center/use/get-started#connecting-to-managed-nodes-and-clusters).
- ![Choose provisioning with the selected method after pasting your device's connection string](./media/how-to-install-iot-edge-on-windows/provisioning-with-selected-method-connection-string.png)
+ ![Initial Windows Admin Center dashboard with target device listed](./media/how-to-install-iot-edge-on-windows/windows-admin-center-initial-dashboard.png)
-1. Once the provisioning is complete, select **Finish**. You will be taken back to the main dashboard. Now, you should see a new device listed, whose type is `IoT Edge Devices`. You can select the IoT Edge device to connect to it. Once on its **Overview** page, you can view the **IoT Edge Module List** and **IoT Edge Status** of your device.
+1. Select **Add**.
-# [PowerShell](#tab/powershell)
+1. On the **Add or create resources** pane, locate the **Azure IoT Edge** tile. Select **Create new** to install a new instance of Azure IoT Edge for Linux on Windows on a device.
-1. In the [Azure portal](https://ms.portal.azure.com/), navigate to the **IoT Edge** tab of your IoT Hub.
+ If you already have IoT Edge for Linux on Windows running on a device, you could select **Add** to connect to that existing IoT Edge device and manage it with Windows Admin Center.
-1. Click on the device ID of your device. Copy the **Primary Connection String** field.
+ ![Select Create New on Azure IoT Edge tile in Windows Admin Center](./media/how-to-install-iot-edge-on-windows/resource-creation-tiles.png)
-1. Paste over the placeholder text in the following command and run it in an elevated PowerShell session on your target device.
+1. The **Create an Azure IoT Edge for Linux on Windows deployment** pane will open. On the **1. Getting Started** tab, review the minimum requirements and select **Next**.
- ```azurepowershell-interactive
-    Provision-EflowVm -provisioningType manual -devConnString "<CONNECTION_STRING_HERE>"
- ```
+1. Review the license terms, check **I Accept**, and select **Next**.
-
+1. You can toggle **Optional diagnostic data** on or off, depending on your preference.
-### Option 2: Provisioning via DPS using symmetric keys
+1. Select **Next: Deploy**.
-This section covers provisioning your device automatically using DPS and symmetric keys.
+ ![Select the Next: Deploy button after toggling optional diagnostic data to your preference](./media/how-to-install-iot-edge-on-windows/select-next-deploy.png)
-# [Windows Admin Center](#tab/windowsadmincenter)
+1. On the **2. Deploy** tab, under **Select a target device**, click on your listed device to validate it meets the minimum requirements. Once its status is confirmed as supported, select **Next**.
-1. On the **Azure IoT Edge device provisioning** pane, select **Symmetric Key (DPS)** from the provisioning method dropdown.
+ ![Select your device to verify it is supported](./media/how-to-install-iot-edge-on-windows/evaluate-supported-device.png)
-1. In the [Azure portal](https://ms.portal.azure.com/), navigate to your DPS instance.
+1. On the **2.2 Settings** tab, review the configuration settings of your deployment.
-1. On the **Overview** tab, copy the **ID Scope** value. Paste it into the scope ID field in the Windows Admin Center.
+ >[!NOTE]
+ >IoT Edge for Linux on Windows uses a default switch, which assigns the Linux virtual machine an internal IP address. This internal IP address cannot be reached from outside the Windows machine. You can connect to the virtual machine locally while logged onto the Windows machine.
+ >
+ >If you are using Windows Server, set up a default switch before deploying IoT Edge for Linux on Windows.
-1. On the **Manage enrollments** tab in the Azure portal, select the enrollment you created. Copy the **Primary Key** value in the enrollment details. Paste it into the symmetric key field in the Windows Admin Center.
+ You can assign a GPU to your deployment to enable GPU-accelerated Linux modules. To gain access to these features, you will need to install the prerequisites detailed in [GPU acceleration for Azure IoT Edge for Linux on Windows](gpu-acceleration.md). If you are only installing these prerequisites at this point in the deployment process, you will need to start again from the beginning.
-1. Provide the registration ID of your device in the registration ID field in the Windows Admin Center.
+   There are two GPU passthrough options available, **Direct Device Assignment (DDA)** and **GPU Paravirtualization (GPU-PV)**, depending on the GPU adapter you assign to your deployment. Examples of each method are shown below.
-1. Choose **Provisioning with the selected method**.
+ For the direct device assignment method, select the number of GPU processors to allocate to your Linux virtual machine.
- ![Choose provisioning with the selected method after filling in the required fields for symmetric key provisioning](./media/how-to-install-iot-edge-on-windows/provisioning-with-selected-method-symmetric-key.png)
+ ![Configuration settings with a direct device assignment GPU enabled.](./media/how-to-install-iot-edge-on-windows/gpu-passthrough-direct-device-assignment.png)
-1. Once the provisioning is complete, select **Finish**. You will be taken back to the main dashboard. Now, you should see a new device listed, whose type is `IoT Edge Devices`. You can select the IoT Edge device to connect to it. Once on its **Overview** page, you can view the **IoT Edge Module List** and **IoT Edge Status** of your device.
+ For the paravirtualization method, no additional settings are needed.
-# [PowerShell](#tab/powershell)
+ ![Configuration settings with a paravirtualization GPU enabled.](./media/how-to-install-iot-edge-on-windows/gpu-passthrough-paravirtualization.png)
-1. Copy the following command into a text editor. Replace the placeholder text with your information as detailed.
+ >[!WARNING]
+ >Enabling hardware device passthrough may increase security risks. Microsoft recommends a device mitigation driver from your GPU's vendor, when applicable. For more information, see [Deploy graphics devices using discrete device assignment](/windows-server/virtualization/hyper-v/deploy/deploying-graphics-devices-using-dda).
- ```azurepowershell-interactive
-    Provision-EflowVm -provisioningType symmetric -scopeId <ID_SCOPE_HERE> -registrationId <REGISTRATION_ID_HERE> -symmKey <PRIMARY_KEY_HERE>
- ```
+ Once you are satisfied with the settings, select **Next**.
-1. In the [Azure portal](https://ms.portal.azure.com/), navigate to your DPS instance.
+1. On the **2.3 Deployment** tab, you can watch the progress of the deployment. The full process includes downloading the Azure IoT Edge for Linux on Windows package, installing the package, configuring the host device, and setting up the Linux virtual machine. This process may take several minutes to complete. A successful deployment is pictured below.
-1. On the **Overview** tab, copy the **ID Scope** value. Paste it over the appropriate placeholder text in the command.
+ ![A successful deployment will show each step with a green check mark and a 'Complete' label](./media/how-to-install-iot-edge-on-windows/successful-deployment.png)
-1. On the **Manage enrollments** tab in the Azure portal, select the enrollment you created. Copy the **Primary Key** value in the enrollment details. Paste it over the appropriate placeholder text in the command.
+Once your deployment is complete, you are ready to provision your device. Select **Next: Connect** to proceed to the **3. Connect** tab, which handles Azure IoT Edge device provisioning.
-1. Provide the registration ID of the device to replace the appropriate placeholder text in the command.
+
-1. Run the command in an elevated PowerShell session on the target device.
+## Provision your device
-
+Choose a method for provisioning your device and follow the instructions in the appropriate section. This article provides the steps for manually provisioning your device with either symmetric keys or X.509 certificates. If you are using automatic provisioning with DPS, follow the appropriate links to complete provisioning.
-### Option 3: Provisioning via DPS using X.509 certificates
+You can use the Windows Admin Center or an elevated PowerShell session to provision your devices.
-This section covers provisioning your device automatically using DPS and X.509 certificates.
+* Manual provisioning:
-# [Windows Admin Center](#tab/windowsadmincenter)
+ * [Manual provisioning using your IoT Edge device's connection string](#manual-provisioning-using-the-connection-string)
+ * [Manual provisioning using X.509 certificates](#manual-provisioning-using-x509-certificates)
-1. On the **Azure IoT Edge device provisioning** pane, select **X.509 Certificate (DPS)** from the provisioning method dropdown.
+* Automatic provisioning:
-1. In the [Azure portal](https://ms.portal.azure.com/), navigate to your DPS instance.
+ * [Automatic provisioning using Device Provisioning Service (DPS) and symmetric keys](how-to-auto-provision-symmetric-keys.md?tabs=eflow#configure-the-device-with-provisioning-information)
+ * [Automatic provisioning using DPS and X.509 certificates](how-to-auto-provision-x509-certs.md?tabs=eflow#configure-the-device-with-provisioning-information)
+ * [Automatic provisioning using DPS and TPM attestation](how-to-auto-provision-tpm-linux-on-windows.md#configure-the-device-with-provisioning-information)
-1. On the **Overview** tab, copy the **ID Scope** value. Paste it into the scope ID field in the Windows Admin Center.
+### Manual provisioning using the connection string
-1. Provide the registration ID of your device in the registration ID field in the Windows Admin Center.
+This section covers provisioning your device manually using your IoT Edge device's connection string.
-1. Upload your certificate and private key files.
+If you haven't already, follow the steps in [Register an IoT Edge device in IoT Hub](how-to-register-device.md) to register your device and retrieve its connection string.
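If you prefer the command line over the Azure portal, a rough sketch of registering a symmetric-key IoT Edge device and retrieving its connection string with the Azure CLI is shown below; the device and hub names are placeholders, and the azure-iot CLI extension is assumed to be installed:

```azurecli
# Assumes the azure-iot extension is installed: az extension add --name azure-iot
az iot hub device-identity create --device-id myEflowDevice --edge-enabled --hub-name <your-hub-name>
az iot hub device-identity connection-string show --device-id myEflowDevice --hub-name <your-hub-name>
```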
-1. Choose **Provisioning with the selected method**.
+# [PowerShell](#tab/powershell)
- ![Choose provisioning with the selected method after filling in the required fields for X.509 certificate provisioning](./media/how-to-install-iot-edge-on-windows/provisioning-with-selected-method-x509-certs.png)
+Run the following command in an elevated PowerShell session on your target device. Replace the placeholder text with your own values.
-1. Once the provisioning is complete, select **Finish**. You will be taken back to the main dashboard. Now, you should see a new device listed, whose type is `IoT Edge Devices`. You can select the IoT Edge device to connect to it. Once on its **Overview** page, you can view the **IoT Edge Module List** and **IoT Edge Status** of your device.
+```powershell
+Provision-EflowVm -provisioningType ManualConnectionString -devConnString "<CONNECTION_STRING_HERE>"
+```
-# [PowerShell](#tab/powershell)
+For more information about the `Provision-EflowVM` command, see [PowerShell functions for IoT Edge for Linux on Windows](reference-iot-edge-for-linux-on-windows-functions.md#provision-eflowvm).
-1. Copy the following command into a text editor. Replace the placeholder text with your information as detailed.
+# [Windows Admin Center](#tab/windowsadmincenter)
- ```azurepowershell-interactive
-    Provision-EflowVm -provisioningType x509 -scopeId <ID_SCOPE_HERE> -registrationId <REGISTRATION_ID_HERE> -identityCertLocWin <ABSOLUTE_CERT_SOURCE_PATH_ON_WINDOWS_MACHINE> -identityPkLocWin <ABSOLUTE_PRIVATE_KEY_SOURCE_PATH_ON_WINDOWS_MACHINE> -identityCertLocVm <ABSOLUTE_CERT_DEST_PATH_ON_LINUX_MACHINE> -identityPkLocVm <ABSOLUTE_PRIVATE_KEY_DEST_PATH_ON_LINUX_MACHINE>
- ```
+1. On the **Azure IoT Edge device provisioning** pane, select **Connection String (Manual)** from the provisioning method dropdown.
-1. In the [Azure portal](https://ms.portal.azure.com/), navigate to your DPS instance.
+1. In the [Azure portal](https://ms.portal.azure.com/), navigate to the **IoT Edge** tab of your IoT Hub.
-1. On the **Overview** tab, copy the **ID Scope** value. Paste it over the appropriate placeholder text in the command.
+1. Click on the device ID of your device. Copy the **Primary Connection String** field.
-1. Provide the registration ID of the device to replace the appropriate placeholder text in the command.
+1. Provide the **Device connection string** that you retrieved from IoT Hub after registering the device.
-1. Replace the appropriate placeholder text with the absolute source path to your certificate file.
+1. Select **Provisioning with the selected method**.
-1. Replace the appropriate placeholder text with the absolute source path to your private key file.
+ ![Choose provisioning with the selected method after pasting your device's connection string](./media/how-to-install-iot-edge-on-windows/provisioning-with-selected-method-connection-string.png)
-1. Run the command in an elevated PowerShell session on the target device.
+1. Once the provisioning is complete, select **Finish**. You will be taken back to the main dashboard. Now, you should see a new device listed with the type `IoT Edge Devices`. You can select the IoT Edge device to connect to it. Once on its **Overview** page, you can view the **IoT Edge Module List** and **IoT Edge Status** of your device.
-## Verify successful configuration
+### Manual provisioning using X.509 certificates
-Verify that IoT Edge for Linux on Windows was successfully installed and configured on your IoT Edge device.
+This section covers provisioning your device manually using X.509 certificates on your IoT Edge device.
+
+If you haven't already, follow the steps in [Register an IoT Edge device in IoT Hub](how-to-register-device.md) to prepare the necessary certificates and register your device.
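As a rough sketch, registering a device that authenticates with X.509 self-signed certificates can also be done from the Azure CLI; the device name, hub name, and thumbprints are placeholders, and the azure-iot CLI extension is assumed to be installed:

```azurecli
# Assumes the azure-iot extension is installed: az extension add --name azure-iot
# Registers an IoT Edge device that authenticates with X.509 thumbprint certificates.
az iot hub device-identity create --device-id myEflowX509Device --edge-enabled --hub-name <your-hub-name> --auth-method x509_thumbprint --primary-thumbprint <primary-thumbprint> --secondary-thumbprint <secondary-thumbprint>
```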
+
+# [PowerShell](#tab/powershell)
+
+Have the device identity certificate and its matching private key ready on your target device. Know the absolute path to both files.
+
+Run the following command in an elevated PowerShell session on your target device. Replace the placeholder text with your own values.
+
+```powershell
+Provision-EflowVm -provisioningType ManualX509 -iotHubHostname "<HUB HOSTNAME>" -deviceId "<DEVICE ID>" -identityCertPath "<ABSOLUTE PATH TO IDENTITY CERT>" -identityPrivKeyPath "<ABSOLUTE PATH TO PRIVATE KEY>"
+```
+
+For more information about the `Provision-EflowVM` command, see [PowerShell functions for IoT Edge for Linux on Windows](reference-iot-edge-for-linux-on-windows-functions.md#provision-eflowvm).
# [Windows Admin Center](#tab/windowsadmincenter)
-1. Select your IoT Edge device from the list of connected devices in Windows Admin Center to connect to it.
+1. On the **Azure IoT Edge device provisioning** pane, select **ManualX509** from the provisioning method dropdown.
-1. The device overview page displays some information about the device:
+ ![Choose manual provisioning with X.509 certificates](./media/how-to-install-iot-edge-on-windows/provisioning-with-selected-method-manual-x509.png)
- 1. The **IoT Edge Module List** section shows running modules on the device. When the IoT Edge service starts for the first time, you should only see the **edgeAgent** module running. The edgeAgent module runs by default and helps to install and start any additional modules that you deploy to your device.
- 1. The **IoT Edge Status** section shows the service status, and should be reporting **active (running)**.
+1. Provide the required parameters:
-1. If you need to troubleshoot the IoT Edge service, use the **Command Shell** tool on the device page to ssh (secure shell) into the virtual machine and run the Linux commands.
+ * **IoT Hub Hostname**: The name of the IoT hub that this device is registered to.
+ * **Device ID**: The name that this device is registered with.
+ * **Certificate file**: Upload the device identity certificate, which will be moved to the virtual machine and used to provision the device.
+ * **Private key file**: Upload the matching private key file, which will be moved to the virtual machine and used to provision the device.
- 1. If you need to troubleshoot the service, retrieve the service logs.
+1. Select **Provisioning with the selected method**.
- ```bash
- journalctl -u iotedge
- ```
+1. Once the provisioning is complete, select **Finish**. You will be taken back to the main dashboard. Now, you should see a new device listed with the type `IoT Edge Devices`. You can select the IoT Edge device to connect to it. Once on its **Overview** page, you can view the **IoT Edge Module List** and **IoT Edge Status** of your device.
- 2. Use the `check` tool to verify configuration and connection status of the device.
+
- ```bash
- sudo iotedge check
- ```
+## Verify successful configuration
+
+Verify that IoT Edge for Linux on Windows was successfully installed and configured on your IoT Edge device.
# [PowerShell](#tab/powershell)

1. Log in to your IoT Edge for Linux on Windows virtual machine using the following command in your PowerShell session:
- ```azurepowershell-interactive
- Ssh-EflowVm
+ ```powershell
+ Connect-EflowVm
    ```

    >[!NOTE]
Verify that IoT Edge for Linux on Windows was successfully installed and configu
1. Once you are logged in, you can check the list of running IoT Edge modules using the following Linux command:

    ```bash
- iotedge list
+ sudo iotedge list
    ```

1. If you need to troubleshoot the IoT Edge service, use the following Linux commands.
Verify that IoT Edge for Linux on Windows was successfully installed and configu
    1. If you need to troubleshoot the service, retrieve the service logs.

        ```bash
- journalctl -u iotedge
+ sudo journalctl -u iotedge
        ```

    2. Use the `check` tool to verify configuration and connection status of the device.
Verify that IoT Edge for Linux on Windows was successfully installed and configu
        sudo iotedge check
        ```
+# [Windows Admin Center](#tab/windowsadmincenter)
+
+1. Select your IoT Edge device from the list of connected devices in Windows Admin Center to connect to it.
+
+1. The device overview page displays some information about the device:
+
+ * The **IoT Edge Module List** section shows running modules on the device. When the IoT Edge service starts for the first time, you should only see the **edgeAgent** module running. The edgeAgent module runs by default and helps to install and start any additional modules that you deploy to your device.
+
+ * The **IoT Edge Status** section shows the service status, and should be reporting **active (running)**.
+ ## Next steps
iot-edge How To Update Iot Edge https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-edge/how-to-update-iot-edge.md
keywords:
Previously updated : 04/07/2021 Last updated : 06/15/2021
-# Update the IoT Edge security daemon and runtime
+# Update IoT Edge
[!INCLUDE [iot-edge-version-201806-or-202011](../../includes/iot-edge-version-201806-or-202011.md)]
The IoT Edge security daemon is a native component that needs to be updated usin
Check the version of the security daemon running on your device by using the command `iotedge version`. If you're using IoT Edge for Linux on Windows, you need to SSH into the Linux virtual machine to check the version.
+# [Linux](#tab/linux)
+ >[!IMPORTANT] >If you are updating a device from version 1.0 or 1.1 to version 1.2, there are differences in the installation and configuration processes that require extra steps. For more information, refer to the steps later in this article: [Special case: Update from 1.0 or 1.1 to 1.2](#special-case-update-from-10-or-11-to-12).
-# [Linux](#tab/linux)
- On Linux x64 devices, use apt-get or your appropriate package manager to update the security daemon to the latest version. Get the latest repository configuration from Microsoft:
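For example, on a device that already has the Microsoft package repository configured, the update itself is a standard package upgrade; the following is a minimal sketch using the 1.1 LTS package name, so adjust it for your distribution and IoT Edge version:

```bash
# Refresh the package lists and update the IoT Edge security daemon (1.1 LTS package name).
sudo apt-get update
sudo apt-get install iotedge
```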
If you want to update to the most recent version of IoT Edge, use the following
:::moniker-end <!-- end 1.2 -->
-With IoT Edge for Linux on Windows, IoT Edge runs in a Linux virtual machine hosted on a Windows device. This virtual machine is pre-installed with IoT Edge, and it is managed with Microsoft Update to keep the components up to date automatically.
+<!-- 1.1 -->
+
+>[!IMPORTANT]
+>If you are updating a device from the public preview version of IoT Edge for Linux on Windows to the generally available version, you need to uninstall and reinstall Azure IoT Edge.
+>
+>To find out if you're currently using the public preview version, navigate to **Settings** > **Apps** on your Windows device. Find **Azure IoT Edge** in the list of apps and features. If your listed version is 1.0.x, you are running the public preview version. Uninstall the app and then [Install and provision IoT Edge for Linux on Windows](how-to-install-iot-edge-on-windows.md) again. If your listed version is 1.1.x, you are running the generally available version and can receive updates through Microsoft Update.
+
+With IoT Edge for Linux on Windows, IoT Edge runs in a Linux virtual machine hosted on a Windows device. This virtual machine is pre-installed with IoT Edge, and you cannot manually update or change the IoT Edge components. Instead, the virtual machine is managed with Microsoft Update to keep the components up to date automatically.
To receive IoT Edge for Linux on Windows updates, the Windows host should be configured to receive updates for other Microsoft products. You can turn on this option with the following steps:
To receive IoT Edge for Linux on Windows updates, the Windows host should be con
1. Toggle the *Receive updates for other Microsoft products when you update Windows* button to **On**.
+<!-- end 1.1 -->
+ # [Windows](#tab/windows) <!-- 1.2 -->
To receive IoT Edge for Linux on Windows updates, the Windows host should be con
:::moniker-end <!-- end 1.2 -->
+<!-- 1.1 -->
+ With IoT Edge for Windows, IoT Edge runs directly on the Windows device. For update instructions using the PowerShell scripts, see [Install and manage Azure IoT Edge for Windows](how-to-install-iot-edge-windows-on-windows.md).
+<!-- end 1.1 -->
+ ## Update the runtime containers
iot-edge Iot Edge For Linux On Windows https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-edge/iot-edge-for-linux-on-windows.md
Previously updated : 01/20/2021 Last updated : 06/18/2021 monikerRange: "=iotedge-2018-06"
-# What is Azure IoT Edge for Linux on Windows (Preview)
+# What is Azure IoT Edge for Linux on Windows
[!INCLUDE [iot-edge-version-201806](../../includes/iot-edge-version-201806.md)]
Azure IoT Edge for Linux on Windows allows you to run containerized Linux worklo
IoT Edge for Linux on Windows works by running a Linux virtual machine on a Windows device. The Linux virtual machine comes pre-installed with the IoT Edge runtime. Any IoT Edge modules deployed to the device run inside the virtual machine. Meanwhile, Windows applications running on the Windows host device can communicate with the modules running in the Linux virtual machine.
-[Get started](how-to-install-iot-edge-on-windows.md) with the preview today.
-
->[!NOTE]
->Please consider taking our [Product survey](https://aka.ms/AzEFLOW-Registration) to help us improve Azure IoT Edge for Linux on Windows based on your IoT Edge background and goals. You can also use this survey to sign up for future Azure IoT Edge for Linux on Windows announcements.
+[Get started](how-to-install-iot-edge-on-windows.md) today.
## Components IoT Edge for Linux on Windows uses the following components to enable Linux and Windows workloads to run alongside each other and communicate seamlessly:
-* **A Linux virtual machine running Azure IoT Edge**: A Linux virtual machine, based on Microsoft's first party [Mariner](https://github.com/microsoft/CBL-Mariner) operating system, is built with the IoT Edge runtime and validated as a tier 1 supported environment for IoT Edge workloads.
+* **A Linux virtual machine running Azure IoT Edge**: A Linux virtual machine, based on Microsoft's first party [CBL-Mariner](https://github.com/microsoft/CBL-Mariner) operating system, is built with the IoT Edge runtime and validated as a tier 1 supported environment for IoT Edge workloads.
* **Windows Admin Center**: An IoT Edge extension for Windows Admin Center facilitates installation, configuration, and diagnostics of IoT Edge on the Linux virtual machine. Windows Admin Center can deploy IoT Edge for Linux on Windows on the local device, or can connect to target devices and manage them remotely.
-* **Microsoft Update**: Integration with Microsoft Update keeps the Windows runtime components, the Mariner Linux VM, and IoT Edge up to date.
+* **Microsoft Update**: Integration with Microsoft Update keeps the Windows runtime components, the CBL-Mariner Linux VM, and IoT Edge up to date.
![Windows and the Linux VM run in parallel, while the Windows Admin Center controls both components](./media/iot-edge-for-linux-on-windows/architecture-and-communication.png)
IoT Edge for Linux on Windows emphasizes interoperability between the Linux and
For samples that demonstrate communication between Windows applications and IoT Edge modules, see [EFLOW & Windows 10 IoT Samples](https://aka.ms/AzEFLOW-Samples).
-## Public preview
-
-IoT Edge for Linux on Windows is currently in [public preview](https://azure.microsoft.com/support/legal/preview-supplemental-terms/). Installation and management processes may be different than for generally available features.
- ## Support Use the IoT Edge support and feedback channels to get assistance with IoT Edge for Linux on Windows.
-**Reporting bugs** – Bugs can be reported on the [issues page](https://github.com/azure/iotedge/issues) of the IoT Edge open-source project. Bugs related to Azure IoT Edge for Linux on Windows can be reported on the [iotedge-eflow issues page](https://aka.ms/AzEFLOW-Issues).
+**Reporting bugs** – Bugs related to Azure IoT Edge for Linux on Windows can be reported on the [iotedge-eflow issues page](https://aka.ms/AzEFLOW-Issues). Bugs related to IoT Edge can be reported on the [issues page](https://github.com/azure/iotedge/issues) of the IoT Edge open-source project.
**Microsoft Customer Support team** – Users who have a [support plan](https://azure.microsoft.com/support/plans/) can engage the Microsoft Customer Support team by creating a support ticket directly from the [Azure portal](https://ms.portal.azure.com/signin/index/?feature.settingsportalinstance=mpac).
Use the IoT Edge support and feedback channels to get assistance with IoT Edge f
## Next steps
-Watch [IoT Edge for Linux on Windows 10 IoT Enterprise](https://aka.ms/EFLOWPPC9) for more information and a sample in action.
+Watch [IoT Edge for Linux on Windows 10 IoT Enterprise](https://aka.ms/azeflow-show) for more information and a sample in action.
Follow the steps in [Install and provision Azure IoT Edge for Linux on a Windows device](how-to-install-iot-edge-on-windows.md) to set up a device with IoT Edge for Linux on Windows.
iot-edge Nested Virtualization https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-edge/nested-virtualization.md
There are two forms of nested virtualization compatible with Azure IoT Edge for
> Ensure that you enable one [networking option](/virtualization/hyper-v-on-windows/user-guide/nested-virtualization#networking-options) for nested virtualization. Failing to do so will result in EFLOW installation errors.

## Deployment on local VM
+
This is the baseline approach for any Windows VM that hosts Azure IoT Edge for Linux on Windows. For this case, nested virtualization needs to be enabled before starting the deployment. Read [Run Hyper-V in a Virtual Machine with Nested Virtualization](/virtualization/hyper-v-on-windows/user-guide/nested-virtualization) for more information on how to configure this scenario. If you are using Windows Server, make sure you [install the Hyper-V role](/windows-server/virtualization/hyper-v/get-started/install-the-hyper-v-role-on-windows-server).

## Deployment on Azure VMs
-If you choose to deploy on Azure VMs, note that there is no external switch by design. Azure IoT Edge for Linux on Windows is also not compatible on an Azure VM running the Server SKU unless a specific script is executed that brings up a default switch. For more information on how to bring up a default switch, see the [Windows Server section](#windows-server) below.
+
+Azure IoT Edge for Linux on Windows is not compatible with an Azure VM running the Server SKU unless a script is executed that brings up a default switch. For more information on how to bring up a default switch, see the [Windows Server section](#windows-server) below.
> [!NOTE]
>
> Any Azure VM that is supposed to host EFLOW must be a VM that [supports nested virtualization](../virtual-machines/acu.md)
-
-## Limited use of external switch
-In any scenario where the VM cannot obtain an IP address through an external switch, the deployment functionality automatically uses the internal default switch for the deployment. This means that the management of the Azure IoT Edge for Linux VM can only be executed on the target device itself (i.e. connecting to the Azure IoT Edge for Linux VM via the WAC PowerShell SSH extension is only possible on localhost).
- ## Windows Server
-For Windows Server users, note that Azure IoT Edge for Linux on Windows does not automatically support the default switch.
-
-* Consequences for local VM: Ensure the EFLOW VM can obtain an IP address through the external switch.
-* Consequences for Azure VM: Because there's no external switch on Azure VMs, it's not possible to deploy EFLOW before you set up an internal switch on the server.
+For Windows Server users, note that Azure IoT Edge for Linux on Windows does not automatically support the default switch. Before you can deploy IoT Edge for Linux on Windows, you must set up an internal switch on the server.
-There is no default switch on Server SKUs by default (regardless of whether the Server SKU runs on an Azure VM or not). When deploying on an Azure VM where the external switch cannot be used, the default switch needs to be set up and configured manually before Azure IoT Edge for Linux on Windows deployment. Our deployment functionality covers this as it requires IP configuration for the internal switch, a NAT configuration, and installing and configuring a DHCP server. Our deployment functionality in public preview states that it does not fiddle around with these settings not to impair network configurations on productive deployments.
+Our deployment functionality does not create the default switch automatically because doing so requires IP configuration for the internal switch, a NAT configuration, and installing and configuring a DHCP server. The deployment functionality deliberately leaves these settings alone so that it does not affect network configurations on production deployments.
-* Relevant information on how to set up the default switch manually can be found here: [How to enable nested virtualization in Azure Virtual Machines](/virtualization/hyper-v-on-windows/user-guide/nested-virtualization)
-* Documentation on how to set up a DHCP server for this scenario can be found here: [Deploy DHCP using Windows PowerShell](/windows-server/networking/technologies/dhcp/dhcp-deploy-wps)
+* For information about setting up the default switch manually, see [How to enable nested virtualization in Azure Virtual Machines](/virtualization/hyper-v-on-windows/user-guide/nested-virtualization)
+* For information about setting up a DHCP server for this scenario, see [Deploy DHCP using Windows PowerShell](/windows-server/networking/technologies/dhcp/dhcp-deploy-wps)
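As a rough outline of what those articles cover, setting up an internal switch with NAT and DHCP on Windows Server might look like the following sketch; the switch name, subnet, and address ranges are illustrative assumptions, so follow the linked guidance for a production setup:

```powershell
# Illustrative sketch only: create an internal switch, give the host an IP address on it,
# add a NAT for outbound connectivity, and serve DHCP leases to the EFLOW virtual machine.
New-VMSwitch -Name "EflowInternal" -SwitchType Internal
New-NetIPAddress -IPAddress 192.168.100.1 -PrefixLength 24 -InterfaceAlias "vEthernet (EflowInternal)"
New-NetNat -Name "EflowNat" -InternalIPInterfaceAddressPrefix "192.168.100.0/24"
Install-WindowsFeature -Name DHCP -IncludeManagementTools
Add-DhcpServerv4Scope -Name "EflowScope" -StartRange 192.168.100.50 -EndRange 192.168.100.100 -SubnetMask 255.255.255.0 -State Active
Set-DhcpServerv4OptionValue -ScopeId 192.168.100.0 -Router 192.168.100.1
```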
iot-edge Quickstart https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-edge/quickstart.md
Title: Quickstart to create an Azure IoT Edge device on Windows | Microsoft Docs description: In this quickstart, learn how to create an IoT Edge device and then deploy prebuilt code remotely from the Azure portal.--- Previously updated : 01/20/2021++++ Last updated : 06/18/2021
monikerRange: "=iotedge-2018-06"
-# Quickstart: Deploy your first IoT Edge module to a Windows device (preview)
+# Quickstart: Deploy your first IoT Edge module to a Windows device
[!INCLUDE [iot-edge-version-201806](../../includes/iot-edge-version-201806.md)]
This quickstart walks you through how to set up your Azure IoT Edge for Linux on
If you don't have an active Azure subscription, create a [free account](https://azure.microsoft.com/free) before you begin.
->[!NOTE]
->IoT Edge for Linux on Windows is in [public preview](https://azure.microsoft.com/support/legal/preview-supplemental-terms/).
- ## Prerequisites Prepare your environment for the Azure CLI.
Make sure your IoT Edge device meets the following requirements:
* Minimum Free Disk Space: 10 GB >[!NOTE]
->This quickstart uses Windows Admin Center to create a deployment of IoT Edge for Linux on Windows. You can also use PowerShell. If you wish to use PowerShell to create your deployment, follow the steps in the how-to guide on [installing and provisioning Azure IoT Edge for Linux on a Windows device](how-to-install-iot-edge-on-windows.md).
+>This quickstart uses PowerShell to create a deployment of IoT Edge for Linux on Windows. You can also use Windows Admin Center. If you wish to use Windows Admin Center to create your deployment, follow the steps in the how-to guide on [installing and provisioning Azure IoT Edge for Linux on a Windows device](how-to-install-iot-edge-on-windows.md?tabs=windowsadmincenter).
## Create an IoT hub
Install IoT Edge for Linux on Windows on your device, and configure it with the
![Diagram that shows the step to start the IoT Edge runtime.](./media/quickstart/start-runtime.png)
-1. [Download Windows Admin Center](https://aka.ms/wacdownload).
-
-1. Follow the prompts in the installation wizard to set up Windows Admin Center on your device.
-
-1. Open Windows Admin Center.
-
-1. Select the **Settings gear** icon in the upper-right corner, and then select **Extensions**.
-
-1. On the **Feeds** tab, select **Add**.
-
-1. Enter `https://aka.ms/wac-insiders-feed` into the text box, and then select **Add**.
-
-1. After the feed has been added, go to the **Available extensions** tab and wait for the extensions list to update.
-
-1. From the list of **Available extensions**, select **Azure IoT Edge**.
-
-1. Install the extension.
-
-1. When the extension is installed, select **Windows Admin Center** in the upper-left corner to go to the main dashboard page.
-
- The **localhost** connection represents the PC where you're running Windows Admin Center.
-
- :::image type="content" source="media/quickstart/windows-admin-center-start-page.png" alt-text="Screenshot of the Windows Admin Start page.":::
-
-1. Select **Add**.
-
- :::image type="content" source="media/quickstart/windows-admin-center-start-page-add.png" alt-text="Screenshot that shows selecting the Add button in Windows Admin Center.":::
+Run the following PowerShell commands on the target device where you want to deploy Azure IoT Edge for Linux on Windows. To deploy to a remote target device using PowerShell, use [Remote PowerShell](/powershell/module/microsoft.powershell.core/about/about_remote) to establish a connection to a remote device and run these commands remotely on that device.
-1. On the Azure IoT Edge tile, select **Create new** to start the installation wizard.
+1. In an elevated PowerShell session, run each of the following commands to download IoT Edge for Linux on Windows.
- :::image type="content" source="media/quickstart/select-tile-screen.png" alt-text="Screenshot that shows creating a new deployment in the Azure IoT Edge til.":::
-
-1. Continue through the installation wizard to accept the Microsoft Software License Terms, and then select **Next**.
+ ```powershell
+ $msiPath = $([io.Path]::Combine($env:TEMP, 'AzureIoTEdge.msi'))
+ $ProgressPreference = 'SilentlyContinue'
+    Invoke-WebRequest "https://aka.ms/AzEflowMSI" -OutFile $msiPath
+ ```
- :::image type="content" source="media/quickstart/wizard-welcome-screen.png" alt-text="Screenshot that shows selecting Next to continue through the installation wizard.":::
+1. Install IoT Edge for Linux on Windows on your device.
-1. Select **Optional diagnostic data**, and then select **Next: Deploy**. This selection provides extended diagnostics data that helps Microsoft monitor and maintain quality of service.
+ ```powershell
+ Start-Process -Wait msiexec -ArgumentList "/i","$([io.Path]::Combine($env:TEMP, 'AzureIoTEdge.msi'))","/qn"
+ ```
- :::image type="content" source="media/quickstart/diagnostic-data-screen.png" alt-text="Screenshot that shows the Diagnostic data options.":::
+1. Set the execution policy on the target device to `AllSigned` if it is not already. You can check the current execution policy in an elevated PowerShell prompt using:
-1. On the **Select target device** screen, select your desired target device to validate that it meets the minimum requirements. For this quickstart, we're installing IoT Edge on the local device, so choose the **localhost** connection. If the target device meets the requirements, select **Next** to continue.
+ ```powershell
+ Get-ExecutionPolicy -List
+ ```
- :::image type="content" source="media/quickstart/wizard-select-target-device-screen.png" alt-text="Screenshot that shows the Target device list.":::
+ If the execution policy of `local machine` is not `AllSigned`, you can set the execution policy using:
-1. Select **Next** to accept the default settings. The deployment screen shows the process of downloading the package, installing the package, configuring the host, and finally setting up the Linux virtual machine (VM). A successful deployment looks like this:
+ ```powershell
+ Set-ExecutionPolicy -ExecutionPolicy AllSigned -Force
+ ```
- :::image type="content" source="media/quickstart/wizard-deploy-success-screen.png" alt-text="Screenshot of a successful deployment.":::
+1. Create the IoT Edge for Linux on Windows deployment.
-1. Select **Next: Connect** to continue to the final step to provision your Azure IoT Edge device with its device ID from your IoT hub instance.
+ ```powershell
+ Deploy-Eflow
+ ```
-1. Paste the connection string you copied [earlier in this quickstart](#register-an-iot-edge-device) into the **Device connection string** field. Then select **Provisioning with the selected method**.
+1. Enter 'Y' to accept the license terms.
- :::image type="content" source="media/quickstart/wizard-provision.png" alt-text="Screenshot that shows the connection string in the Device connection string field.":::
+1. Enter 'O' or 'R' to toggle **Optional diagnostic data** on or off, depending on your preference. A successful deployment is pictured below.
-1. After provisioning is complete, select **Finish** to complete and return to the Windows Admin Center start screen. You should see your device listed as an IoT Edge device.
+ ![A successful deployment will say 'Deployment successful' at the end of the messages](./media/how-to-install-iot-edge-on-windows/successful-powershell-deployment-2.png)
- :::image type="content" source="media/quickstart/windows-admin-center-device-screen.png" alt-text="Screenshot that shows all connections in Windows Admin Center.":::
+1. Provision your device using the device connection string that you retrieved in the previous section. Replace the placeholder text with your own value.
-1. Select your Azure IoT Edge device to view its dashboard. You should see that the workloads from your device twin in Azure IoT Hub have been deployed. The **IoT Edge Module List** should show one module running **edgeAgent**, and the **IoT Edge Status** should be **active (running)**.
+ ```powershell
+    Provision-EflowVm -provisioningType ManualConnectionString -devConnString "<CONNECTION_STRING_HERE>"
+ ```
Your IoT Edge device is now configured. It's ready to run cloud-deployed modules.
In this quickstart, you created a new IoT Edge device and installed the IoT Edge
The module that you pushed generates sample environment data that you can use for testing later. The simulated sensor is monitoring both a machine and the environment around the machine. For example, this sensor might be in a server room, on a factory floor, or on a wind turbine. The messages that it sends include ambient temperature and humidity, machine temperature and pressure, and a timestamp. IoT Edge tutorials use the data created by this module as test data for analytics.
-From the command shell in Windows Admin Center, confirm that the module you deployed from the cloud is running on your IoT Edge device.
-
-1. Connect to your newly created IoT Edge device.
+1. Log in to your IoT Edge for Linux on Windows virtual machine using the following command in your PowerShell session:
- :::image type="content" source="media/quickstart/connect-edge-screen.png" alt-text="Screenshot that shows selecting Connect in Windows Admin Center.":::
-
- On the **Overview** page, you'll see the **IoT Edge Module List** and **IoT Edge Status**. You can see the modules that have been deployed and the device status.
-
-1. Under **Tools**, select **Command Shell**. The command shell is a PowerShell terminal that automatically uses Secure Shell (SSH) to connect to your Azure IoT Edge device's Linux VM on your Windows PC.
+ ```powershell
+ Connect-EflowVm
+ ```
- :::image type="content" source="media/quickstart/command-shell-screen.png" alt-text="Screenshot that shows opening the command shell.":::
+ >[!NOTE]
+ >The only account allowed to SSH to the virtual machine is the user that created it.
-1. To verify the three modules on your device, run the following Bash command:
+1. Once you are logged in, you can check the list of running IoT Edge modules using the following Linux command:
- ```bash
- sudo iotedge list
- ```
+ ```bash
+ sudo iotedge list
+ ```
- :::image type="content" source="media/quickstart/iotedge-list-screen.png" alt-text="Screenshot that shows the command shell I o T edge list output.":::
+ ![Verify your temperature sensor, agent, and hub are running.](./media/quickstart/iotedge-list-screen.png)
-1. View the messages being sent from the temperature sensor module to the cloud.
+1. View the messages being sent from the temperature sensor module to the cloud using the following Linux command:
- ```bash
- iotedge logs SimulatedTemperatureSensor -f
- ```
+ ```bash
+ sudo iotedge logs SimulatedTemperatureSensor -f
+ ```
- >[!Important]
- >IoT Edge commands are case-sensitive when they refer to module names.
+ >[!IMPORTANT]
+ >IoT Edge commands are case-sensitive when they refer to module names.
- :::image type="content" source="media/quickstart/temperature-sensor-screen.png" alt-text="Screenshot that shows the list of messages sent from the module to the cloud.":::
+ ![View the output logs of the Simulated Temperature Sensor module.](./media/quickstart/temperature-sensor-screen.png)
You can also use the [Azure IoT Hub extension for Visual Studio Code](https://marketplace.visualstudio.com/items?itemName=vsciot-vscode.azure-iot-toolkit) to watch messages arrive at your IoT hub.
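Alternatively, if you have the Azure CLI available, a quick way to watch the same messages is the monitor-events command; this is a sketch in which the hub and device names are placeholders and the azure-iot CLI extension is assumed to be installed:

```azurecli
# Assumes the azure-iot extension is installed: az extension add --name azure-iot
az iot hub monitor-events --hub-name <your-hub-name> --device-id <your-device-id>
```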
Use the dashboard extension in Windows Admin Center to uninstall Azure IoT Edge
1. Select **Uninstall**. After Azure IoT Edge is removed, Windows Admin Center removes the Azure IoT Edge device connection entry from the **Start** page. >[!Note]
->Another way to remove Azure IoT Edge from your Windows system is to select **Start** > **Settings** > **Apps** > **Azure IoT Edge** > **Uninstall** on your IoT Edge device. This method removes Azure IoT Edge from your IoT Edge device, but leaves the connection behind in Windows Admin Center. To complete the removal, uninstall Windows Admin Center from the **Settings** menu as well.
+>Another way to remove Azure IoT Edge from your Windows system is to select **Start** > **Settings** > **Apps** > **Azure IoT Edge LTS** > **Uninstall** on your IoT Edge device. This method removes Azure IoT Edge from your IoT Edge device, but leaves the connection behind in Windows Admin Center. To complete the removal, uninstall Windows Admin Center from the **Settings** menu as well.
## Next steps
iot-edge Reference Iot Edge For Linux On Windows Functions https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-edge/reference-iot-edge-for-linux-on-windows-functions.md
Title: PowerShell functions for Azure IoT Edge for Linux on Windows | Microsoft
description: Reference information for Azure IoT Edge for Linux on Windows PowerShell functions to deploy, provision, and status IoT Edge for Linux on Windows virtual machines. - Previously updated : 02/16/2021+ Last updated : 06/18/2021
monikerRange: "=iotedge-2018-06"
[!INCLUDE [iot-edge-version-201806](../../includes/iot-edge-version-201806.md)]
-Understand the PowerShell functions that deploy, provision, and get the status your IoT Edge for Linux on Windows virtual machine.
+Understand the PowerShell functions that deploy, provision, and get the status of your IoT Edge for Linux on Windows (EFLOW) virtual machine.
-The commands described in this article are from the `AzureEFLOW.psm1` file, which can be found on your system in your `WindowsPowerShell` directory under `C:\Program Files\WindowsPowerShellModules\AzureEFLOW`.
+## Prerequisites
-## Deploy-Eflow
+The commands described in this article are from the `AzureEFLOW.psm1` file, which can be found on your system in your `WindowsPowerShell` directory under `C:\Program Files\WindowsPowerShell\Modules\AzureEFLOW`.
-The **Deploy-Eflow** command is the main deployment method. The deployment command creates the virtual machine, provisions files, and deploys the IoT Edge agent module. While none of the parameters are required, they can be used to provision your IoT Edge device during the deployment and modify settings for the virtual machine during creation. For a full list, use the command `Get-Help Deploy-Eflow -full`.
+If you don't have the **AzureEflow** folder in your PowerShell directory, use the following steps to download and install Azure IoT Edge for Linux on Windows:
->[!NOTE]
->For the provisioning type, **X509** currently exclusively refers to X509 provisioning using an [Azure IoT Hub Device Provisioning Service](../iot-dps/about-iot-dps.md). The manual X509 provisioning method is not currently supported.
+1. In an elevated PowerShell session, run each of the following commands to download IoT Edge for Linux on Windows.
-| Parameter | Accepted values | Comments |
-| | | -- |
-| eflowVhdxDir | Directory path | Directory where deployment stores VHDX file for VM. |
-| provisioningType | **manual**, **TPM**, **X509**, or **symmetric** | Defines the type of provisioning you wish to use for your IoT Edge device. |
-| devConnString | The device connection string of an existing IoT Edge device | Device connection string for manually provisioning an IoT Edge device (**manual**). |
-| symmKey | The primary key for an existing DPS enrollment or the primary key of an existing IoT Edge device registered using symmetric keys | Symmetric key for provisioning an IoT Edge device (**X509** or **symmetric**). |
-| scopeId | The scope ID for an existing DPS instance. | Scope ID for provisioning an IoT Edge device (**X509** or **symmetric**). |
-| registrationId | The registration ID of an existing IoT Edge device | Registration ID for provisioning an IoT Edge device (**X509** or **symmetric**). |
-| identityCertLocVm | Directory path; must be in a folder that can be owned by the `iotedge` service | Absolute destination path of the identity certificate on your virtual machine for provisioning an IoT Edge device (**X509** or **symmetric**). |
-| identityCertLocWin | Directory path | Absolute source path of the identity certificate in Windows for provisioning an IoT Edge device (**X509** or **symmetric**). |
-| identityPkLocVm | Directory path; must be in a folder that can be owned by the `iotedge` service | Absolute destination path of the identity private key on your virtual machine for provisioning an IoT Edge device (**X509** or **symmetric**). |
-| identityPkLocWin | Directory path | Absolute source path of the identity private key in Windows for provisioning an IoT Edge device (**X509** or **symmetric**). |
-| vmSizeDefintion | No longer than 30 characters | Definition of the number of cores and available RAM for the virtual machine. **Default value**: Standard_K8S_v1. |
-| vmDiskSize | Between 8 GB and 256 GB | Maximum disk size of the dynamically expanding virtual hard disk. **Default value**: 16 GB. |
-| vmUser | No longer than 30 characters | Username for logging on to the virtual machine. |
-| vnetType | **Transparent** or **ICS** | The type of virtual switch. **Default value**: Transparent. Transparent refers to an external switch, while ICS refers to an internal switch. |
-| vnetName | No longer than 64 characters | The name of the virtual switch. **Default value**: External. |
-| enableVtpm | None | **Switch parameter**. Create the virtual machine with TPM enabled or disabled. |
-| mobyPackageVersion | No longer than 30 characters | Version of Moby package to be verified or installed on the virtual machine. **Default value:** 19.03.11. |
-| iotedgePackageVersion | No longer than 30 characters | Version of IoT Edge package to be verified or installed on the virtual machine. **Default value:** 1.1.0. |
-| installPackages | None | **Switch parameter**. When toggled, the function will attempt to install the Moby and IoT Edge packages rather than only verifying the packages are present. |
-
->[!NOTE]
->By default, if the process cannot find an external switch with the name `External`, it will search for any existing external switch through which to obtain an IP address. If there is no external switch available, it will search for an internal switch. If there is no internal switch available, it will attempt to create the default switch through which to obtain an IP address.
+ ```powershell
+ $msiPath = $([io.Path]::Combine($env:TEMP, 'AzureIoTEdge.msi'))
+ $ProgressPreference = 'SilentlyContinue'
+ Invoke-WebRequest "https://aka.ms/AzEflowMSI" -OutFile $msiPath
+ ```
-## Verify-EflowVm
+1. Install IoT Edge for Linux on Windows on your device.
-The **Verify-EflowVm** command is an exposed function to check that the IoT Edge for Linux on Windows virtual machine was created. It takes only common parameters, and it will return **true** if the virtual machine was created and **false** if not. For additional information, use the command `Get-Help Verify-EflowVm -full`.
+ ```powershell
+ Start-Process -Wait msiexec -ArgumentList "/i","$([io.Path]::Combine($env:TEMP, 'AzureIoTEdge.msi'))","/qn"
+ ```
-## Provision-EflowVm
+ You can specify custom installation and VHDX directories by adding `INSTALLDIR="<FULLY_QUALIFIED_PATH>"` and `VHDXDIR="<FULLY_QUALIFIED_PATH>"` parameters to the install command.
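   For example, a command along these lines installs to custom locations. This is only a sketch; the directory paths are placeholders that you would replace with your own fully qualified paths.

   ```powershell
   # Hypothetical custom locations for the EFLOW installation and the VHDX file
   $msiPath = $([io.Path]::Combine($env:TEMP, 'AzureIoTEdge.msi'))
   Start-Process -Wait msiexec -ArgumentList "/i","$msiPath","/qn","INSTALLDIR=""D:\AzureIoTEdge""","VHDXDIR=""D:\EflowVhdx"""
   ```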
-The **Provision-EflowVm** command adds the provisioning information for your IoT Edge device to the virtual machine's IoT Edge `config.yaml` file. Provisioning can also be done during the deployment phase by setting parameters in the **Deploy-Eflow** command. For additional information, use the command `Get-Help Provision-EflowVm -full`.
+1. Set the execution policy on the target device to `AllSigned` if it is not already.
-| Parameter | Accepted values | Comments |
-| | | -- |
-| vmUser | No longer than 30 characters | Username for logging on to the virtual machine. |
-| provisioningType | **manual**, **TPM**, **X509**, or **symmetric** | Defines the type of provisioning you wish to use for your IoT Edge device. |
-| devConnString | The device connection string of an existing IoT Edge device | Device connection string for manually provisioning an IoT Edge device (**manual**). |
-| symmKey | The primary key for an existing DPS enrollment or the primary key of an existing IoT Edge device registered using symmetric keys | Symmetric key for provisioning an IoT Edge device (**DPS**, **symmetric**). |
-| scopeId | The scope ID for an existing DPS instance. | Scope ID for provisioning an IoT Edge device (**DPS**). |
-| registrationId | The registration ID of an existing IoT Edge device | Registration ID for provisioning an IoT Edge device (**DPS**). |
-| identityCertLocVm | Directory path; must be in a folder that can be owned by the `iotedge` service | Absolute destination path of the identity certificate on your virtual machine for provisioning an IoT Edge device (**DPS**, **X509**). |
-| identityCertLocWin | Directory path | Absolute source path of the identity certificate in Windows for provisioning an IoT Edge device (**dps**, **X509**). |
-| identityPkLocVm | Directory path; must be in a folder that can be owned by the `iotedge` service | Absolute destination path of the identity private key on your virtual machine for provisioning an IoT Edge device (**DPS**, **X509**). |
-| identityPkLocWin | Directory path | Absolute source path of the identity private key in Windows for provisioning an IoT Edge device (**dps**, **X509**). |
+ ```powershell
+ Set-ExecutionPolicy -ExecutionPolicy AllSigned -Force
+ ```
-## Get-EflowVmName
+## Connect-EflowVM
-The **Get-EflowVmName** command is used to query the virtual machine's current hostname. This command exists to account for the fact that the Windows hostname can change over time. It takes only common parameters, and it will return the virtual machine's current hostname. For additional information, use the command `Get-Help Get-EflowVmName -full`.
+The **Connect-EflowVM** command connects to the virtual machine using SSH. The only account allowed to SSH to the virtual machine is the user that created it.
-## Get-EflowLogs
+This command only works on a PowerShell session running on the host device. It won't work when using Windows Admin Center or PowerShell ISE.
+
+For more information, use the command `Get-Help Connect-EflowVM -full`.
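As a quick check from an elevated PowerShell session on the Windows host, a call like the sketch below opens the SSH session. This assumes the AzureEFLOW module is loaded and that no parameters are required, since none are listed for this command.

```powershell
# Open an SSH session to the EFLOW VM from the Windows host (sketch; assumes no parameters are needed)
Connect-EflowVm
```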
-The **Get-EflowLogs** command is used to collect and bundle logs from the IoT Edge for Linux on Windows deployment. It outputs the bundled logs in the form of a `.zip` folder. For additional information, use the command `Get-Help Get-EflowLogs -full`.
+## Copy-EflowVmFile
+
+The **Copy-EflowVmFile** command copies a file to or from the virtual machine using SCP. Use the optional parameters to specify the source and destination file paths as well as the direction of the copy.
+
+The user **iotedge-user** must have read permission to any origin directories or write permission to any destination directories on the virtual machine.
| Parameter | Accepted values | Comments | | | | -- |
-| vmUser | No longer than 30 characters | Username for logging on to the virtual machine. |
+| fromFile | String representing path to file | Defines the file to be read from. |
+| toFile | String representing path to file | Defines the file to be written to. |
+| pushFile | None | This flag indicates copy direction. If present, the command pushes the file to the virtual machine. If absent, the command pulls the file from the virtual machine. |
-## Get-EflowVmTpmProvisioningInfo
+For more information, use the command `Get-Help Copy-EflowVMFile -full`.
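As a rough sketch, a push copy might look like the following. The file paths are placeholders, and the destination must be writable by **iotedge-user** as noted above.

```powershell
# Push a file from the Windows host into the EFLOW VM (paths are illustrative placeholders)
Copy-EflowVmFile -fromFile "C:\Data\sample-config.json" -toFile "/home/iotedge-user/sample-config.json" -pushFile
```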
-The **Get-EflowVmTpmProvisioningInfo** command is used to collect and display the virtual machine's vTPM provisioning information. If the virtual machine was created without vTPM, the command will return that it was unable to find TPM provisioning information. For additional information, use the command `Get-Help Get-EflowVmTpmProvisioningInfo -full`.
+## Deploy-Eflow
+
+The **Deploy-Eflow** command is the main deployment method. The deployment command creates the virtual machine, provisions files, and deploys the IoT Edge agent module. While none of the parameters are required, they can be used to provision your IoT Edge device during the deployment and modify settings for the virtual machine during creation.
| Parameter | Accepted values | Comments | | | | -- |
-| vmUser | No longer than 30 characters | Username for logging on to the virtual machine. |
+| acceptEula | **Yes** or **No** | A shortcut to accept/deny EULA and bypass the EULA prompt. |
+| acceptOptionalTelemetry | **Yes** or **No** | A shortcut to accept/deny optional telemetry and bypass the telemetry prompt. |
+| cpuCount | Integer value between 1 and the device's CPU cores | Number of CPU cores for the VM.<br><br>**Default value**: 1 vCore. |
+| memoryInMB | Integer value between 1024 and the maximum amount of free memory of the device |Memory allocated for the VM.<br><br>**Default value**: 1024 MB. |
+| vmDiskSize | Between 8 GB and 256 GB | Maximum disk size of the dynamically expanding virtual hard disk.<br><br>**Default value**: 16 GB. |
+| gpuName | GPU Device name | Name of GPU device to be used for passthrough. |
+| gpuPassthroughType | **DirectDeviceAssignment**, **ParaVirtualization**, or none (CPU only) | GPU Passthrough type |
+| gpuCount | Integer value between 1 and the number of the device's GPU cores | Number of GPU devices for the VM. <br><br>**Note**: If using ParaVirtualization, make sure to set gpuCount = 1 |
+
+For more information, use the command `Get-Help Deploy-Eflow -full`.
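As an illustration only, a deployment that accepts the EULA and sizes the VM might look like the sketch below. The core and memory values are arbitrary examples, not recommendations.

```powershell
# Deploy the EFLOW VM with illustrative sizing values (sketch, not a recommendation)
Deploy-Eflow -acceptEula Yes -acceptOptionalTelemetry No -cpuCount 2 -memoryInMB 2048
```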
+
+## Get-EflowLogs
+
+The **Get-EflowLogs** command collects and bundles logs from the IoT Edge for Linux on Windows deployment and installation. It outputs the bundled logs in the form of a `.zip` folder.
+
+For more information, use the command `Get-Help Get-EflowLogs -full`.
-## Get-EflowVmAddr
+## Get-EflowVm
-The **Get-EflowVmAddr** command is used to find and display the virtual machine's IP and MAC addresses. It takes only common parameters. For additional information, use the command `Get-Help Get-EflowVmAddr -full`.
+The **Get-EflowVm** command returns the virtual machine's current configuration. This command takes no parameters. It returns an object that contains four properties:
-## Get-EflowVmSystemInformation
+* VmConfiguration
+* EdgeRuntimeVersion
+* EdgeRuntimeStatus
+* SystemStatistics
-The **Get-EflowVmSystemInformation** command is used to collect and display system information from the virtual machine, such as memory and storage usage. For additional information, use the command `Get-Help Get-EflowVmSystemInformation -full`.
+To view a specific property in a readable list, run the `Get-EflowVM` command with the property expanded. For example:
+
+```powershell
+Get-EflowVM | Select -ExpandProperty VmConfiguration | Format-List
+```
+
+For more information, use the command `Get-Help Get-EflowVm -full`.
+
+## Get-EflowVmFeature
+
+The **Get-EflowVmFeature** command returns the enablement status of IoT Edge for Linux on Windows features.
| Parameter | Accepted values | Comments | | | | -- |
-| vmUser | No longer than 30 characters | Username for logging on to the virtual machine. |
+| feature | **DpsTpm** | Feature name to toggle. |
+
+For more information, use the command `Get-Help Get-EflowVmFeature -full`.
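For example, the following sketch checks whether the DPS TPM feature is enabled.

```powershell
# Query the enablement status of the DpsTpm feature
Get-EflowVmFeature -feature DpsTpm
```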
-## Get-EflowVmEdgeInformation
+## Get-EflowVmName
-The **Get-EflowVmEdgeInformation** command is used to collect and display IoT Edge information from the virtual machine, such as the version of IoT Edge the virtual machine is running. For additional information, use the command `Get-Help Get-EflowVmEdgeInformation -full`.
+The **Get-EflowVmName** command returns the virtual machine's current hostname. This command exists to account for the fact that the Windows hostname can change over time. It takes only common parameters.
+
+For more information, use the command `Get-Help Get-EflowVmName -full`.
+
+## Get-EflowVmTelemetryOption
+
+The **Get-EflowVmTelemetryOption** command displays the status of the telemetry (either **Optional** or **Required**) inside the virtual machine.
+
+For more information, use the command `Get-Help Get-EflowVmTelemetryOption -full`.
+
+## Invoke-EflowVmCommand
+
+The **Invoke-EflowVMCommand** command executes a Linux command inside the virtual machine and returns the output. This command only works for Linux commands that return a finite output. It cannot be used for Linux commands that require user interaction or that run indefinitely.
+
+The following optional parameters can be used to specify the command in advance.
| Parameter | Accepted values | Comments | | | | -- |
-| vmUser | No longer than 30 characters | Username for logging on to the virtual machine. |
+| command | String | Command to be executed in the VM. |
+| ignoreError | None | If this flag is present, ignore errors from the command. |
+
+For more information, use the command `Get-Help Invoke-EflowVmCommand -full`.
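For instance, a command with finite output such as the one below can be run inside the VM; the Linux command shown is just an example.

```powershell
# Run a short Linux command inside the EFLOW VM and return its output
Invoke-EflowVmCommand -command "cat /etc/os-release"
```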
-## Get-EflowVmEdgeModuleList
+## Provision-EflowVm
-The **Get-EflowVmEdgeModuleList** command is used to query and display the list of IoT Edge modules running on the virtual machine. For additional information, use the command `Get-Help Get-EflowVmEdgeModuleList -full`.
+The **Provision-EflowVm** command adds the provisioning information for your IoT Edge device to the virtual machine's IoT Edge `config.yaml` file.
| Parameter | Accepted values | Comments | | | | -- |
-| vmUser | No longer than 30 characters | Username for logging on to the virtual machine. |
+| provisioningType | **ManualConnectionString**, **ManualX509**, **DpsTPM**, **DpsX509**, or **DpsSymmetricKey** | Defines the type of provisioning you wish to use for your IoT Edge device. |
+| devConnString | The device connection string of an existing IoT Edge device | Device connection string for manually provisioning an IoT Edge device (**ManualConnectionString**). |
+| iotHubHostname | The host name of an existing IoT hub | Azure IoT Hub hostname for provisioning an IoT Edge device (**ManualX509**). |
+| deviceId | The device ID of an existing IoT Edge device | Device ID for provisioning an IoT Edge device (**ManualX509**). |
+| scopeId | The scope ID for an existing DPS instance. | Scope ID for provisioning an IoT Edge device (**DpsTPM**, **DpsX509**, or **DpsSymmetricKey**). |
+| symmKey | The primary key for an existing DPS enrollment or the primary key of an existing IoT Edge device registered using symmetric keys | Symmetric key for provisioning an IoT Edge device (**DpsSymmetricKey**). |
+| registrationId | The registration ID of an existing IoT Edge device | Registration ID for provisioning an IoT Edge device (**DpsSymmetricKey**). |
+| identityCertPath | Directory path; must be in a folder that can be owned by the `iotedge` service | Absolute destination path of the identity certificate on your virtual machine for provisioning an IoT Edge device (**ManualX509**, **DpsX509**). |
+| identityPrivKeyPath | Directory path | Absolute source path of the identity private key on your virtual machine for provisioning an IoT Edge device (**ManualX509**, **DpsX509**). |
-## Get-EflowVmEdgeStatus
+For more information, use the command `Get-Help Provision-EflowVm -full`.
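As a sketch, DPS symmetric key provisioning might look like the following. The scope ID, registration ID, and key are placeholders for values from your own DPS instance.

```powershell
# Provision the EFLOW device with DPS symmetric key attestation (placeholder values)
Provision-EflowVm -provisioningType DpsSymmetricKey -scopeId "0ne00000000" -registrationId "my-eflow-device" -symmKey "<enrollment-primary-key>"
```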
-The **Get-EflowVmEdgeStatus** command is used to query and display the status of IoT Edge runtime on the virtual machine. For additional information, use the command `Get-Help Get-EflowVmEdgeStatus -full`.
+## Set-EflowVM
+
+The **Set-EflowVM** command updates the virtual machine configuration with the requested properties. Use the optional parameters to define a specific configuration for the virtual machine.
| Parameter | Accepted values | Comments | | | | -- |
-| vmUser | No longer than 30 characters | Username for logging on to the virtual machine. |
+| cpuCount | Integer value between 1 and the device's CPU cores | Number of CPU cores for the VM. |
+| memoryInMB | Integer value between 1024 and the maximum amount of free memory of the device | Memory allocated for the VM. |
+| gpuName | GPU device name | Name of the GPU device to be used for passthrough. |
+| gpuPassthroughType | **DirectDeviceAssignment**, **ParaVirtualization**, or none (no passthrough) | GPU Passthrough type |
+| gpuCount | Integer value between 1 and the device's GPU cores | Number of GPU devices for the VM.<br><br>**Note**: Only valid when using DirectDeviceAssignment |
+| headless | None | If this flag is present, the command runs without prompting the user to confirm when a security warning is issued. |
+
+For more information, use the command `Get-Help Set-EflowVM -full`.
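For example, the following sketch resizes the VM to two cores and 2 GB of memory; the values are illustrative.

```powershell
# Update the EFLOW VM configuration with illustrative sizing values
Set-EflowVM -cpuCount 2 -memoryInMB 2048
```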
-## Get-EflowVmUserName
+## Set-EflowVmFeature
-The **Get-EflowVmUserName** command is used to query and display the virtual machine username that was used to create the virtual machine from the registry. It takes only common parameters. For additional information, use the command `Get-Help Get-EflowVmUserName -full`.
+The **Set-EflowVmFeature** command enables or disables the status of IoT Edge for Linux on Windows features.
-## Get-EflowVmSshKey
+| Parameter | Accepted values | Comments |
+| | | -- |
+| feature | **DpsTpm** | Feature name to toggle. |
+| enable | None | If this flag is present, the command enables the feature. |
-The **Get-EflowVmSshKey** command is used to query and display the SSH key used by the virtual machine. It takes only common parameters. For additional information, use the command `Get-Help Get-EflowVmSshKey -full`.
+For more information, use the command `Get-Help Set-EflowVmFeature -full`.
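For example, this sketch enables the DPS TPM feature; omitting the `-enable` flag would disable it.

```powershell
# Enable the DpsTpm feature on the EFLOW VM
Set-EflowVmFeature -feature DpsTpm -enable
```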
-## Ssh-EflowVm
+## Set-EflowVmTelemetryOption
-The **Ssh-EflowVm** command is used to SSH into the virtual machine. The only account allowed to SSH to the virtual machine is the user that created it. For additional information, use the command `Get-Help Ssh-EflowVm -full`.
+The **Set-EflowVmTelemetryOption** command enables or disables the optional telemetry inside the virtual machine.
| Parameter | Accepted values | Comments | | | | -- |
-| vmUser | No longer than 30 characters | Username for logging on to the virtual machine. |
+| optionalTelemetry | **True** or **False** | Whether optional telemetry is selected. |
+
+For more information, use the command `Get-Help Set-EflowVmTelemetryOption -full`.
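For example, the call sketched below opts in to optional telemetry, using the **True**/**False** values from the table above.

```powershell
# Turn on optional telemetry inside the EFLOW VM
Set-EflowVmTelemetryOption -optionalTelemetry True
```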
+
+## Start-EflowVm
+
+The **Start-EflowVm** command starts the virtual machine. If the virtual machine is already started, no action is taken.
+
+For more information, use the command `Get-Help Start-EflowVm -full`.
+
+## Stop-EflowVm
+
+The **Stop-EflowVm** command stops the virtual machine. If the virtual machine is already stopped, no action is taken.
+
+For more information, use the command `Get-Help Stop-EflowVm -full`.
+
+## Verify-EflowVm
+
+The **Verify-EflowVm** command is an exposed function that checks whether the IoT Edge for Linux on Windows virtual machine was created. It takes only common parameters, and it will return **True** if the virtual machine was created and **False** if not.
+
+For more information, use the command `Get-Help Verify-EflowVm -full`.
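Because the command returns a Boolean, it can gate later steps in a script, as in the sketch below.

```powershell
# Proceed only if the EFLOW VM was created
if (Verify-EflowVm) {
    Write-Output "EFLOW virtual machine found."
}
```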
## Next steps
-Learn how to use these commands in the following article:
+Learn how to use these commands to install and provision IoT Edge for Linux on Windows in the following article:
-* [Install Azure IoT Edge for Linux on Windows](./how-to-install-iot-edge-windows-on-windows.md)
+* [Install Azure IoT Edge for Linux on Windows](./how-to-install-iot-edge-windows-on-windows.md)
iot-edge Support https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-edge/support.md
Currently, there is no supported way to run IoT Edge version 1.2 on Windows devi
:::moniker range="iotedge-2018-06" Modules built as Linux containers can be deployed to either Linux or Windows devices. For Linux devices, the IoT Edge runtime is installed directly on the host device. For Windows devices, a Linux virtual machine prebuilt with the IoT Edge runtime runs on the host device.
-[IoT Edge for Linux on Windows](iot-edge-for-linux-on-windows.md) is currently in public preview, but is the recommended way to run IoT Edge on Windows devices.
+[IoT Edge for Linux on Windows](iot-edge-for-linux-on-windows.md) is the recommended way to run IoT Edge on Windows devices.
| Operating System | AMD64 | ARM32v7 | ARM64 | | - | -- | - | -- |
-| Raspberry Pi OS Stretch | | ![Raspberry Pi OS Stretch + ARM32v7](./media/tutorial-c-module/green-check.png) | |
-| Ubuntu Server 18.04 | ![Ubuntu Server 18.04 + AMD64](./media/tutorial-c-module/green-check.png) | | Public preview |
-| Windows 10 Pro | Public preview | | |
-| Windows 10 Enterprise | Public preview | | |
-| Windows 10 IoT Enterprise | Public preview | | |
-| Windows Server 2019 | Public preview | | |
+| Raspberry Pi OS Stretch | | ![Raspberry Pi OS Stretch + ARM32v7](./media/support/green-check.png) | |
+| Ubuntu Server 18.04 | ![Ubuntu Server 18.04 + AMD64](./media/support/green-check.png) | | Public preview |
+| Windows 10 Pro | ![Windows 10 Pro + AMD64](./media/support/green-check.png) | | |
+| Windows 10 Enterprise | ![Windows 10 Enterprise + AMD64](./media/support/green-check.png) | | |
+| Windows 10 IoT Enterprise | ![Windows 10 IoT Enterprise + AMD64](./media/support/green-check.png) | | |
+| Windows Server 2019 | ![Windows Server 2019 + AMD64](./media/support/green-check.png) | | |
All Windows operating systems must be version 1809 (build 17763) or later. :::moniker-end
All Windows operating systems must be version 1809 (build 17763) or later.
| Operating System | AMD64 | ARM32v7 | ARM64 | | - | -- | - | -- |
-| Raspberry Pi OS Stretch | | ![Raspberry Pi OS Stretch + ARM32v7](./media/tutorial-c-module/green-check.png) | |
-| Ubuntu Server 18.04 | ![Ubuntu Server 18.04 + AMD64](./media/tutorial-c-module/green-check.png) | | Public preview |
+| Raspberry Pi OS Stretch | | ![Raspberry Pi OS Stretch + ARM32v7](./media/support/green-check.png) | |
+| Ubuntu Server 18.04 | ![Ubuntu Server 18.04 + AMD64](./media/support/green-check.png) | | Public preview |
:::moniker-end <!-- end 1.2 -->
Modules built as Windows containers can be deployed only to Windows devices.
| Operating System | AMD64 | ARM32v7 | ARM64 | | - | -- | - | -- |
-| Windows 10 IoT Enterprise | ![check1](./media/tutorial-c-module/green-check.png) | | |
-| Windows Server 2019 | ![check1](./media/tutorial-c-module/green-check.png) | | |
-| Windows Server IoT 2019 | ![check1](./media/tutorial-c-module/green-check.png) | | |
+| Windows 10 IoT Enterprise | ![check1](./media/support/green-check.png) | | |
+| Windows Server 2019 | ![check1](./media/support/green-check.png) | | |
+| Windows Server IoT 2019 | ![check1](./media/support/green-check.png) | | |
All Windows operating systems must be version 1809 (build 17763). The specific build of Windows is required for IoT Edge on Windows because the version of the Windows containers must exactly match the version of the host Windows device. Windows containers currently only use build 17763.
The systems listed in the following table are considered compatible with Azure I
| Operating System | AMD64 | ARM32v7 | ARM64 | | - | -- | - | -- |
-| [CentOS-7](https://wiki.centos.org/Manuals/ReleaseNotes/CentOS7) | ![CentOS + AMD64](./media/tutorial-c-module/green-check.png) | ![CentOS + ARM32v7](./media/tutorial-c-module/green-check.png) | ![CentOS + ARM64](./media/tutorial-c-module/green-check.png) |
-| [Ubuntu 20.04 <sup>1</sup>](https://wiki.ubuntu.com/FocalFossa/ReleaseNotes) | ![Ubuntu 20.04 + AMD64](./media/tutorial-c-module/green-check.png) | ![Ubuntu 20.04 + ARM32v7](./media/tutorial-c-module/green-check.png) | ![Ubuntu 20.04 + ARM64](./media/tutorial-c-module/green-check.png) |
-| [Debian 9](https://www.debian.org/releases/stretch/) | ![Debian 9 + AMD64](./media/tutorial-c-module/green-check.png) | ![Debian 9 + ARM32v7](./media/tutorial-c-module/green-check.png) | ![Debian 9 + ARM64](./media/tutorial-c-module/green-check.png) |
-| [Debian 10](https://www.debian.org/releases/buster/) | ![Debian 10 + AMD64](./media/tutorial-c-module/green-check.png) | ![Debian 10 + ARM32v7](./media/tutorial-c-module/green-check.png) | ![Debian 10 + ARM64](./media/tutorial-c-module/green-check.png) |
-| [Mentor Embedded Linux Flex OS](https://www.mentor.com/embedded-software/linux/mel-flex-os/) | ![Mentor Embedded Linux Flex OS + AMD64](./media/tutorial-c-module/green-check.png) | ![Mentor Embedded Linux Flex OS + ARM32v7](./media/tutorial-c-module/green-check.png) | ![Mentor Embedded Linux Flex OS + ARM64](./media/tutorial-c-module/green-check.png) |
-| [Mentor Embedded Linux Omni OS](https://www.mentor.com/embedded-software/linux/mel-omni-os/) | ![Mentor Embedded Linux Omni OS + AMD64](./media/tutorial-c-module/green-check.png) | | ![Mentor Embedded Linux Omni OS + ARM64](./media/tutorial-c-module/green-check.png) |
-| [RHEL 7](https://access.redhat.com/documentation/en-us/red_hat_enterprise_linux/7) | ![RHEL 7 + AMD64](./media/tutorial-c-module/green-check.png) | ![RHEL 7 + ARM32v7](./media/tutorial-c-module/green-check.png) | ![RHEL 7 + ARM64](./media/tutorial-c-module/green-check.png) |
-| [Ubuntu 18.04](https://wiki.ubuntu.com/BionicBeaver/ReleaseNotes) | ![Ubuntu 18.04 + AMD64](./media/tutorial-c-module/green-check.png) | ![Ubuntu 18.04 + ARM32v7](./media/tutorial-c-module/green-check.png) | ![Ubuntu 18.04 + ARM64](./media/tutorial-c-module/green-check.png) |
-| [Wind River 8](https://docs.windriver.com/category/os-wind_river_linux) | ![Wind River 8 + AMD64](./media/tutorial-c-module/green-check.png) | | |
-| [Yocto](https://www.yoctoproject.org/) | ![Yocto + AMD64](./media/tutorial-c-module/green-check.png) | ![Yocto + ARM32v7](./media/tutorial-c-module/green-check.png) | ![Yocto + ARM64](./media/tutorial-c-module/green-check.png) |
-| Raspberry Pi OS Buster | | ![Raspberry Pi OS Buster + ARM32v7](./media/tutorial-c-module/green-check.png) | ![Raspberry Pi OS Buster + ARM64](./media/tutorial-c-module/green-check.png) |
+| [CentOS-7](https://wiki.centos.org/Manuals/ReleaseNotes/CentOS7) | ![CentOS + AMD64](./media/support/green-check.png) | ![CentOS + ARM32v7](./media/support/green-check.png) | ![CentOS + ARM64](./media/support/green-check.png) |
+| [Ubuntu 20.04 <sup>1</sup>](https://wiki.ubuntu.com/FocalFossa/ReleaseNotes) | ![Ubuntu 20.04 + AMD64](./media/support/green-check.png) | ![Ubuntu 20.04 + ARM32v7](./media/support/green-check.png) | ![Ubuntu 20.04 + ARM64](./media/support/green-check.png) |
+| [Debian 9](https://www.debian.org/releases/stretch/) | ![Debian 9 + AMD64](./media/support/green-check.png) | ![Debian 9 + ARM32v7](./media/support/green-check.png) | ![Debian 9 + ARM64](./media/support/green-check.png) |
+| [Debian 10](https://www.debian.org/releases/buster/) | ![Debian 10 + AMD64](./media/support/green-check.png) | ![Debian 10 + ARM32v7](./media/support/green-check.png) | ![Debian 10 + ARM64](./media/support/green-check.png) |
+| [Mentor Embedded Linux Flex OS](https://www.mentor.com/embedded-software/linux/mel-flex-os/) | ![Mentor Embedded Linux Flex OS + AMD64](./media/support/green-check.png) | ![Mentor Embedded Linux Flex OS + ARM32v7](./media/support/green-check.png) | ![Mentor Embedded Linux Flex OS + ARM64](./media/support/green-check.png) |
+| [Mentor Embedded Linux Omni OS](https://www.mentor.com/embedded-software/linux/mel-omni-os/) | ![Mentor Embedded Linux Omni OS + AMD64](./media/support/green-check.png) | | ![Mentor Embedded Linux Omni OS + ARM64](./media/support/green-check.png) |
+| [RHEL 7](https://access.redhat.com/documentation/en-us/red_hat_enterprise_linux/7) | ![RHEL 7 + AMD64](./media/support/green-check.png) | ![RHEL 7 + ARM32v7](./media/support/green-check.png) | ![RHEL 7 + ARM64](./media/support/green-check.png) |
+| [Ubuntu 18.04](https://wiki.ubuntu.com/BionicBeaver/ReleaseNotes) | ![Ubuntu 18.04 + AMD64](./media/support/green-check.png) | ![Ubuntu 18.04 + ARM32v7](./media/support/green-check.png) | ![Ubuntu 18.04 + ARM64](./media/support/green-check.png) |
+| [Wind River 8](https://docs.windriver.com/category/os-wind_river_linux) | ![Wind River 8 + AMD64](./media/support/green-check.png) | | |
+| [Yocto](https://www.yoctoproject.org/) | ![Yocto + AMD64](./media/support/green-check.png) | ![Yocto + ARM32v7](./media/support/green-check.png) | ![Yocto + ARM64](./media/support/green-check.png) |
+| Raspberry Pi OS Buster | | ![Raspberry Pi OS Buster + ARM32v7](./media/support/green-check.png) | ![Raspberry Pi OS Buster + ARM64](./media/support/green-check.png) |
<sup>1</sup> The Ubuntu Server 18.04 installation steps in [Install or uninstall Azure IoT Edge for Linux](how-to-install-iot-edge.md) should work without any changes on Ubuntu 20.04.
iot-hub-device-update Import Update https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-hub-device-update/import-update.md
If you haven't already done so, be sure to familiarize yourself with the basic [
$importManifest | Out-File '.\importManifest.json' -Encoding UTF8 ```
- For quick reference, here are some example values for the above parameters. You can also view the complete [import manifest schema](import-schema.md) for more details.
+ The following table is a quick reference for how to populate the above parameters. If you need more information, you can also view the complete [import manifest schema](import-schema.md).
| Parameter | Description | | | -- |
If you haven't already done so, be sure to familiarize yourself with the basic [
| updateName | Identifier for a class of updates. The class can be anything you choose. It will often be a device or model name. | updateVersion | Version number distinguishing this update from others that have the same Provider and Name. Does not have to match a version of an individual software component on the device (but can if you choose). | updateType | <ul><li>Specify `microsoft/swupdate:1` for image update</li><li>Specify `microsoft/apt:1` for package update</li></ul>
- | installedCriteria | Used during deployment to compare the version already on the device with the version of the update. installedCriteria must match the version that is on the device, or deploying the update to the device will show as "failed".<ul><li>For `microsoft/swupdate:1` update type, specify value of SWVersion </li><li>For `microsoft/apt:1` update type, specify **name-version**, where _name_ is the name of the APT Manifest and _version_ is the version of the APT Manifest. For example, contoso-iot-edge-1.0.0.0.
- | updateFilePath(s) | Path to the update file(s) on your computer
+ | installedCriteria | Used during deployment to compare the version already on the device with the version of the update. Deploying the update to the device will return a "failed" result if the installedCriteria value doesn't match the version that is on the device.<ul><li>For `microsoft/swupdate:1` update type, specify value of SWVersion </li><li>For `microsoft/apt:1` update type, specify **name-version**, where _name_ is the name of the APT Manifest and _version_ is the version of the APT Manifest. For example, contoso-iot-edge-1.0.0.0.
+ | updateFilePath(s) | Path to the update file(s) on your computer.
## Review the generated import manifest
Example:
## Import an update > [!NOTE]
-> The instructions below show how to import an update via the Azure portal UI. You can also use the [Device Update for IoT Hub APIs](/rest/api/deviceupdate/updates) to import an update.
+> The instructions below show how to import an update via the Azure portal UI. You can also use the [Device Update for IoT Hub APIs](#if-youre-importing-via-apis-instead) to import an update instead.
1. Log in to the [Azure portal](https://portal.azure.com) and navigate to your IoT Hub with Device Update.
Example:
:::image type="content" source="media/import-update/import-new-update-2.png" alt-text="Import New Update" lightbox="media/import-update/import-new-update-2.png":::
-5. Select the folder icon or text box under "Select an Import Manifest File". You will see a file picker dialog. Select the Import Manifest you created previously using the PowerShell cmdlet. Next, select the folder icon or text box under "Select one or more update files". You will see a file picker dialog. Select your update file(s).
+5. Select the folder icon or text box under "Select an Import Manifest File". You will see a file picker dialog. Select the Import Manifest you created previously using the PowerShell cmdlet. Next, select the folder icon or text box under "Select one or more update files". You will see a file picker dialog. Select the same update file(s) that you included when you created your import manifest.
:::image type="content" source="media/import-update/select-update-files.png" alt-text="Select Update Files" lightbox="media/import-update/select-update-files.png":::
Example:
:::image type="content" source="media/import-update/update-ready.png" alt-text="Job Status" lightbox="media/import-update/update-ready.png":::
+## If you're importing via APIs instead
+
+If you've just finished using the steps above to import via the Azure portal, skip to Next Steps below.
+
+If you want to use the [Device Update for IoT Hub Update APIs](/rest/api/deviceupdate/updates) to import an update instead of importing via the Azure portal, note the following:
+ - You will need to upload your update file(s) to an Azure Blob Storage location before you call the Update APIs.
+ - You can reference this [sample API call](import-schema.md#example-import-request-body) which uses the import manifest you created above.
++ ## Next Steps [Create Groups](create-update-group.md)
iot-hub Iot Hub Dev Guide Azure Ad Rbac https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-hub/iot-hub-dev-guide-azure-ad-rbac.md
Previously updated : 04/21/2021 Last updated : 06/23/2021
The following tables describe the permissions available for IoT Hub service API
| Microsoft.Devices/IotHubs/cloudToDeviceMessages/send/action | Send cloud-to-device message to any device | | Microsoft.Devices/IotHubs/cloudToDeviceMessages/feedback/action | Receive, complete, or abandon cloud-to-device message feedback notification | | Microsoft.Devices/IotHubs/cloudToDeviceMessages/queue/purge/action | Deletes all the pending commands for a device |
-| Microsoft.Devices/IotHubs/directMethods/invoke/action | Invokes a direct method on a device |
+| Microsoft.Devices/IotHubs/directMethods/invoke/action | Invokes a direct method on any device or module |
| Microsoft.Devices/IotHubs/fileUpload/notifications/action | Receive, complete, or abandon file upload notifications | | Microsoft.Devices/IotHubs/statistics/read | Read device and service statistics | | Microsoft.Devices/IotHubs/configurations/read | Read device management configurations |
The [the built-in endpoint](iot-hub-devguide-messages-read-builtin.md) doesn't s
## Next steps - For more information on the advantages of using Azure AD in your application, see [Integrating with Azure Active Directory](../active-directory/develop/active-directory-how-to-integrate.md).-- For more information on requesting access tokens from Azure AD for users and service principals, see [Authentication scenarios for Azure AD](../active-directory/develop/authentication-vs-authorization.md).
+- For more information on requesting access tokens from Azure AD for users and service principals, see [Authentication scenarios for Azure AD](../active-directory/develop/authentication-vs-authorization.md).
key-vault Network Security https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/key-vault/general/network-security.md
This section will cover the different ways that the Azure Key Vault firewall can
### Key Vault Firewall Disabled (Default)
-By default, when you create a new key vault, the Azure Key Vault firewall is disabled. All applications and Azure services can access the key vault and send requests to the key vault. Note, this configuration does not mean that any user will be able to perform operations on your key vault. The key vault still restricts to secrets, keys, and certificates stored in key vault by requiring Azure Active Directory authentication and access policy permissions. To understand key vault authentication in more detail see the key vault authentication fundamentals document [here](/azure/key-vault/general/authentication.md). For more information, see [Access Azure Key Vault behind a firewall](./access-behind-firewall.md).
+By default, when you create a new key vault, the Azure Key Vault firewall is disabled. All applications and Azure services can access the key vault and send requests to the key vault. Note, this configuration does not mean that any user will be able to perform operations on your key vault. The key vault still restricts access to secrets, keys, and certificates stored in key vault by requiring Azure Active Directory authentication and access policy permissions. To understand key vault authentication in more detail see the key vault authentication fundamentals document [here](/azure/key-vault/general/authentication). For more information, see [Access Azure Key Vault behind a firewall](./access-behind-firewall.md).
### Key Vault Firewall Enabled (Trusted Services Only)
To understand how to configure a private link connection on your key vault, plea
## Next steps * [Virtual network service endpoints for Key Vault](overview-vnet-service-endpoints.md)
-* [Azure Key Vault security overview](security-features.md)
+* [Azure Key Vault security overview](security-features.md)
key-vault Security Features https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/key-vault/general/security-features.md
When you create a key vault in an Azure subscription, it's automatically associa
In all types of access, the application authenticates with Azure AD. The application uses any [supported authentication method](../../active-directory/develop/authentication-vs-authorization.md) based on the application type. The application acquires a token for a resource in the plane to grant access. The resource is an endpoint in the management or data plane, based on the Azure environment. The application uses the token and sends a REST API request to Key Vault. To learn more, review the [whole authentication flow](../../active-directory/develop/v2-oauth2-auth-code-flow.md).
-For full details, see [Key Vault Authentication Fundamentals](/azure/key-vault/general/authentication.md)
+For full details, see [Key Vault Authentication Fundamentals](/azure/key-vault/general/authentication)
## Key Vault authentication options
You should also take regular back ups of your vault on update/delete/create of o
- [Azure Key Vault security baseline](security-baseline.md) - [Azure Key Vault best practices](security-baseline.md) - [Virtual network service endpoints for Azure Key Vault](overview-vnet-service-endpoints.md)-- [Azure RBAC: Built-in roles](../../role-based-access-control/built-in-roles.md)
+- [Azure RBAC: Built-in roles](../../role-based-access-control/built-in-roles.md)
key-vault Third Party Solutions https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/key-vault/managed-hsm/third-party-solutions.md
editor: ''
Previously updated : 06/21/2021 Last updated : 06/23/2021
Several vendors have worked closely with Microsoft to integrate their solutions
|-|-| |[Cloudflare](https://cloudflare.com)|Cloudflare's Keyless SSL enables your websites to use Cloudflare's SSL service while keeping custody of their private keys in Managed HSM. This service, coupled with Managed HSM, helps provide a high level of protection by safeguarding your private keys, performing signing and encryption operations internally, providing access controls, and storing keys in a tamper-resistant FIPS 140-2 Level 3 HSM. <br>[Documentation](https://developers.cloudflare.com/ssl/keyless-ssl/hardware-security-modules/azure-managed-hsm) |[NewNet Communication Technologies](https://newnet.com/)|NewNet's Secure Transaction Cloud (STC) is an industry-first, cloud-based secure payment routing, switching, and transport solution augmented with a cloud-based virtualized HSM, handling mobile, web, and in-store payments. STC enables cloud transformation for payment entities and rapid deployment for greenfield payment providers.<br/>[Azure Marketplace offering](https://azuremarketplace.microsoft.com/marketplace/apps/newnetcommunicationtechnologies1589991852134.secure_transaction_cloud?tab=overview)<br/>[Documentation](https://newnet.com/business-units/secure-transactions/products/secure-transaction-cloud-stc/)|
-|[PrimeKey](https://www.primekey.com)|EJBCA Enterprise, world's most use PKI (public key infrastructure), provides with the basic security services for trusted identities and secure communication for any use case. A single instance of EJBCA Enterprise supports multiple CAs and levels to enable you to build complete infrastructure(s) for multiple use cases.<br>[Azure Marketplace offering](https://azuremarketplace.microsoft.com/marketplace/apps/primekey.ejbca_enterprise_cloud_2)<br/>[Documentation]()|
+|[PrimeKey](https://www.primekey.com)|EJBCA Enterprise, world's most used PKI (public key infrastructure), provides the basic security services for trusted identities and secure communication for any use case. A single instance of EJBCA Enterprise supports multiple CAs and levels to enable you to build complete infrastructure(s) for multiple use cases.<br>[Azure Marketplace offering](https://azuremarketplace.microsoft.com/marketplace/apps/primekey.ejbca_enterprise_cloud_2)<br/>[Documentation](https://doc.primekey.com/x/a4z_/)|
lab-services Upload Custom Image Shared Image Gallery https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/lab-services/upload-custom-image-shared-image-gallery.md
You will need permission to create an Azure VM in your school's Azure subscripti
1. Install software and make any necessary configuration changes to the Azure VM's image.
-1. Run [SysPrep](../virtual-machines/windows/capture-image-resource.md#generalize-the-windows-vm-using-sysprep) if you need to create a generalized image. For more information, see [Generalized and specialized images](../virtual-machines/shared-image-galleries.md#generalized-and-specialized-images).
+1. Run [SysPrep](../virtual-machines/generalize.md#windows) if you need to create a generalized image. For more information, see [Generalized and specialized images](../virtual-machines/shared-image-galleries.md#generalized-and-specialized-images).
1. In shared image gallery, [create an image definition](../virtual-machines/windows/shared-images-portal.md#create-an-image-definition) or choose an existing image definition. - Choose **Gen 1** for the **VM generation**.
lighthouse Tenants Users Roles https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/lighthouse/concepts/tenants-users-roles.md
Title: Tenants, users, and roles in Azure Lighthouse scenarios description: Understand how Azure Active Directory tenants, users, and roles can be used in Azure Lighthouse scenarios. Previously updated : 05/11/2021 Last updated : 06/23/2021
All [built-in roles](../../role-based-access-control/built-in-roles.md) are curr
> [!NOTE] > Once a new applicable built-in role is added to Azure, it can be assigned when [onboarding a customer using Azure Resource Manager templates](../how-to/onboard-customer.md). There may be a delay before the newly-added role becomes available in Partner Center when [publishing a managed service offer](../how-to/publish-managed-services-offers.md).
+## Transferring delegated subscriptions between Azure AD tenants
+
+If a subscription is [transferred to another Azure AD tenant account](../../cost-management-billing/manage/billing-subscription-transfer.md#transfer-a-subscription-to-another-azure-ad-tenant-account), the [registration definition and registration assignment resources](architecture.md#delegation-resources-created-in-the-customer-tenant) created through the [Azure Lighthouse onboarding process](../how-to/onboard-customer.md) will be preserved. This means that access granted through Azure Lighthouse to managing tenants will remain in effect for that subscription (or for delegated resource groups within that subscription).
+
+The only exception is if the subscription is transferred to an Azure AD tenant to which it had been previously delegated. In this case, the delegation resources for that tenant are removed and the access granted through Azure Lighthouse will no longer apply, since the subscription now belongs directly to that tenant (rather than being delegated to it through Azure Lighthouse). However, if that subscription had also been delegated to other managing tenants, those other managing tenants will retain the same access to the subscription.
+ ## Next steps - Learn about [recommended security practices for Azure Lighthouse](recommended-security-practices.md).
load-balancer Update Load Balancer With Vm Scale Set https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/load-balancer/update-load-balancer-with-vm-scale-set.md
The new inbound NAT pool should not have an overlapping front-end port range wit
--add virtualMachineProfile.networkProfile.networkInterfaceConfigurations[0].ipConfigurations[0].loadBalancerInboundNatPools "{'id':'/subscriptions/mySubscriptionId/resourceGroups/MyResourceGroup/providers/Microsoft.Network/loadBalancers/MyLb/inboundNatPools/MyNatPool'}" az vmss update-instances
- -–instance-ids *
+ --instance-ids *
--resource-group MyResourceGroup --name MyVMSS ```
To delete the NAT pool, first remove it from the scale set. A full example using
--name MyVMSS az network lb inbound-nat-pool delete --resource-group MyResourceGroup
- -–lb-name MyLoadBalancer
+ --lb-name MyLoadBalancer
--name MyNatPool ```
Make sure to create separate inbound NAT pools with non-overlapping frontend por
--add virtualMachineProfile.networkProfile.networkInterfaceConfigurations[0].ipConfigurations[0].loadBalancerInboundNatPools "{'id':'/subscriptions/mySubscriptionId/resourceGroups/MyResourceGroup/providers/Microsoft.Network/loadBalancers/MyLb/inboundNatPools/MyNatPool'}" az vmss update-instances
- -–instance-ids *
+ --instance-ids *
--resource-group MyResourceGroup --name MyVMSS
Make sure to create separate inbound NAT pools with non-overlapping frontend por
--add virtualMachineProfile.networkProfile.networkInterfaceConfigurations[0].ipConfigurations[0].loadBalancerInboundNatPools "{'id':'/subscriptions/mySubscriptionId/resourceGroups/MyResourceGroup/providers/Microsoft.Network/loadBalancers/MyLb/inboundNatPools/MyNatPool2'}" az vmss update-instances
- -–instance-ids *
+ --instance-ids *
--resource-group MyResourceGroup --name MyVMSS2 ```
logic-apps Create Managed Service Identity https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/logic-apps/create-managed-service-identity.md
As a specific example, suppose that you want to run the [Snapshot Blob operation
> [!IMPORTANT] > To access Azure storage accounts behind firewalls by using HTTP requests and managed identities,
-> make sure that you also set up your storage account with the [exception that allows access by trusted Microsoft services](../connectors/connectors-create-api-azureblobstorage.md#access-trusted-service).
+> make sure that you also set up your storage account with the [exception that allows access by trusted Microsoft services](../connectors/connectors-create-api-azureblobstorage.md#access-blob-storage-with-managed-identities).
To run the [Snapshot Blob operation](/rest/api/storageservices/snapshot-blob), the HTTP action specifies these properties:
logic-apps Logic Apps Limits And Config https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/logic-apps/logic-apps-limits-and-config.md
ms.suite: integration Previously updated : 05/25/2021 Last updated : 06/23/2021 # Limits and configuration reference for Azure Logic Apps
The following tables list the values for a single workflow definition:
| `description` - Maximum length | 256 characters || | `parameters` - Maximum number of items | 50 parameters || | `outputs` - Maximum number items | 10 outputs ||
-| `trackedProperties` - Maximum size | 16,000 characters ||
+| `trackedProperties` - Maximum size | 8,000 characters ||
|||| <a name="run-duration-retention-limits"></a>
machine-learning Concept Compute Target https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/machine-learning/concept-compute-target.md
See the following table to learn more about supported series and restrictions.
| [NDv2](../virtual-machines/ndv2-series.md) | Requires approval. | GPU | Compute clusters and instance | | [NV](../virtual-machines/nv-series.md) | None. | GPU | Compute clusters and instance | | [NVv3](../virtual-machines/nvv3-series.md) | Requires approval. | GPU | Compute clusters and instance |
-| [NCT4_v3](../virtual-machines/nct4-v3-series.md) | Requires approval. | GPU | Compute clusters and instance |
-| [NDA100_v4](../virtual-machines/nda100-v4-series.md) | Requires approval. | GPU | Compute clusters and instance |
+| [NCasT4_v3](../virtual-machines/nct4-v3-series.md) | Requires approval. | GPU | Compute clusters and instance |
+| [NDasrA100_v4](../virtual-machines/nda100-v4-series.md) | Requires approval. | GPU | Compute clusters and instance |
While Azure Machine Learning supports these VM series, they might not be available in all Azure regions. To check whether VM series are available, see [Products available by region](https://azure.microsoft.com/global-infrastructure/services/?products=virtual-machines).
machine-learning How To Configure Private Link https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/machine-learning/how-to-configure-private-link.md
For information on Azure Virtual Machines, see the [Virtual Machines documentati
In some situations, you may want to allow someone to connect to your secured workspace over a public endpoint, instead of through the VNet. After configuring a workspace with a private endpoint, you can optionally enable public access to the workspace. Doing so does not remove the private endpoint. All communications between components behind the VNet is still secured. It enables public access only to the workspace, in addition to the private access through the VNet. > [!WARNING]
-> When connecting over the public endpoint, some features of studio will fail to access your data. This problem happens when the data is stored on a service that is secured behind the VNet. For example, an Azure Storage Account. Also please note compute instance Jupyter/JupyterLab/RStudio functionality and running notebooks will not work.
+> When connecting over the public endpoint, some features of studio will fail to access your data. This problem happens when the data is stored on a service that is secured behind the VNet. For example, an Azure Storage Account. Also note that compute instance Jupyter/JupyterLab/RStudio functionality and running notebooks are not supported.
To enable public access to a private endpoint-enabled workspace, use the following steps:
machine-learning How To Create Manage Compute Instance https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/machine-learning/how-to-create-manage-compute-instance.md
Logs from the setup script execution appear in the logs folder in the compute in
## Manage
-Start, stop, restart, and delete a compute instance. A compute instance does not automatically scale down, so make sure to stop the resource to prevent ongoing charges. Stopping a compute instance deallocates it. Then start it again when you need it. While stopping the compute instance stops the billing for compute hours, you will still be billed for disk, public IP, and standard load balancer.
+Start, stop, restart, and delete a compute instance. A compute instance does not automatically scale down, so make sure to stop the resource to prevent ongoing charges. Stopping a compute instance deallocates it. Then start it again when you need it. While stopping the compute instance stops the billing for compute hours, you will still be billed for disk, public IP, and standard load balancer.
> [!TIP]
-> The compute instance has 120GB OS disk. If you run out of disk space, [use the terminal](how-to-access-terminal.md) to clear at least 1-2 GB before you stop or restart the compute instance.
+> The compute instance has a 120 GB OS disk. If you run out of disk space, [use the terminal](how-to-access-terminal.md) to clear at least 1-2 GB before you stop or restart the compute instance. Do not stop the compute instance by issuing `sudo shutdown` from the terminal.
# [Python](#tab/python)
You can perform the following actions:
For each compute instance in your workspace that you created (or that was created for you), you can: * Access Jupyter, JupyterLab, RStudio on the compute instance
-* SSH into compute instance. SSH access is disabled by default but can be enabled at compute instance creation time. SSH access is through public/private key mechanism. The tab will give you details for SSH connection such as IP address, username, and port number.
+* SSH into the compute instance. SSH access is disabled by default but can be enabled at compute instance creation time. SSH access uses a public/private key mechanism. The tab gives you details for the SSH connection such as IP address, username, and port number. In a virtual network deployment, disabling SSH prevents SSH access from the public internet; you can still SSH from within the virtual network using the private IP address of the compute instance node and port 22.
* Get details about a specific compute instance such as IP address, and region.
machine-learning How To High Availability Machine Learning https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/machine-learning/how-to-high-availability-machine-learning.md
Runs in Azure Machine Learning are defined by a run specification. This specific
> Pipelines created in studio designer cannot currently be exported as code. * Manage configurations as code.+ * Avoid hardcoded references to the workspace. Instead, configure a reference to the workspace instance using a [config file](how-to-configure-environment.md#workspace) and use [Workspace.from_config()](/python/api/azureml-core/azureml.core.workspace.workspace#remarks) to initialize the workspace. To automate the process, use the [Azure CLI extension for machine learning](reference-azure-machine-learning-cli.md) command [az ml folder attach](/cli/azure/ml(v1)/folder#ext_azure_cli_ml_az_ml_folder_attach). * Use run submission helpers such as [ScriptRunConfig](/python/api/azureml-core/azureml.core.scriptrunconfig) and [Pipeline](/python/api/azureml-pipeline-core/azureml.pipeline.core.pipeline(class)). * Use [Environments.save_to_directory()](/python/api/azureml-core/azureml.core.environment(class)#save-to-directory-path--overwrite-false-) to save your environment definitions.
The following artifacts can be exported and imported between workspaces by using
> [!TIP] > * __Registered datasets__ cannot be downloaded or moved. This includes datasets generated by Azure ML, such as intermediate pipeline datasets. However datasets that refer to a shared file location that both workspaces can access, or where the underlying data storage is replicated, can be registered on both workspaces. Use the [az ml dataset register](/cli/azure/ml(v1)/dataset#ext_azure_cli_ml_az_ml_dataset_register) to register a dataset.
->
> * __Run outputs__ are stored in the default storage account associated with a workspace. While run outputs might become inaccessible from the studio UI in the case of a service outage, you can directly access the data through the storage account. For more information on working with data stored in blobs, see [Create, download, and list blobs with Azure CLI](../storage/blobs/storage-quickstart-blobs-cli.md). ## Next steps
marketplace Azure Vm Create Certification Faq https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/marketplace/azure-vm-create-certification-faq.md
Provisioning issues can include the following failure scenarios:
> [!NOTE] > For more information about VM generalization, see: > - [Linux documentation](azure-vm-create-using-approved-base.md#generalize-the-image)
-> - [Windows documentation](../virtual-machines/windows/capture-image-resource.md#generalize-the-windows-vm-using-sysprep)
+> - [Windows documentation](../virtual-machines/generalize.md#windows)
## VHD specifications
marketplace Azure Vm Create Using Approved Base https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/marketplace/azure-vm-create-using-approved-base.md
Refer to the following documentation to connect to your [Windows](../virtual-mac
## Next steps - Recommended next step: [Test your VM image](azure-vm-image-test.md) to ensure it meets Azure Marketplace publishing requirements. This is optional.-- If you don't want to test your VM image, sign in to [Partner Center](https://partner.microsoft.com/) to publish your image.
+- If you don't want to test your VM image, sign in to [Partner Center](https://go.microsoft.com/fwlink/?linkid=2165935) to publish your image.
- If you encountered difficulty creating your new Azure-based VHD, see [VM FAQ for Azure Marketplace](azure-vm-create-faq.md).
marketplace Azure Vm Create Using Own Image https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/marketplace/azure-vm-create-using-own-image.md
Previously updated : 06/02/2021 Last updated : 06/23/2021 # Create a virtual machine using your own image
Register-AzResourceProvider -ProviderNamespace Microsoft.PartnerCenterIngestion
## Next steps - [Test your VM image](azure-vm-image-test.md) to ensure it meets Azure Marketplace publishing requirements (optional).-- If you don't want to test your VM image, sign in to [Partner Center](https://partner.microsoft.com/) and publish the SIG Image.
+- If you don't want to test your VM image, sign in to [Partner Center](https://go.microsoft.com/fwlink/?linkid=2165935) and publish the SIG Image.
- If you encountered difficulty creating your new Azure-based VHD, see [VM FAQ for Azure Marketplace](azure-vm-create-faq.md).
marketplace Azure Vm Get Sas Uri https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/marketplace/azure-vm-get-sas-uri.md
Previously updated : 04/21/2021 Last updated : 06/23/2021
Check the SAS URI before publishing it on Partner Center to avoid any issues rel
## Next steps - If you run into issues, see [VM SAS failure messages](azure-vm-sas-failure-messages.md)-- [Sign in to Partner Center](https://partner.microsoft.com/dashboard/account/v3/enrollment/introduction/partnership)
+- [Sign in to Partner Center](https://go.microsoft.com/fwlink/?linkid=2165935)
- [Create a virtual machine offer on Azure Marketplace](azure-vm-create.md)
marketplace Cloud Partner Portal Migration Faq https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/marketplace/cloud-partner-portal-migration-faq.md
description: Answers to commonly asked questions about transitioning offers from
--++ Last updated 07/14/2020
You can continue doing business in Partner Center:
| Offer publishing and offer management experience | We've moved your offer data from the Cloud Partner Portal to Partner Center. You will now access your offers in Partner Center, which offers an improved user experience and intuitive interface. Learn how to [Update an existing offer in the commercial marketplace](update-existing-offer.md). | | Availability of your offers in the commercial marketplace | No changes. If your offer is live in the commercial marketplace, it will continue to be live. | | New purchases and deployments | No changes. Your customers can continue purchasing and deploying your offers with no interruptions. |
-| Payouts | Any purchases and deployments will continue to be paid out to you as normal. Learn more about [Getting paid in the commercial marketplace](/partner-center/marketplace-get-paid?context=/azure/marketplace/context/context). |
-| API integrations with existing [Cloud Partner Portal APIs](cloud-partner-portal-api-overview.md) | Existing Cloud Partner Portal APIs are still supported and your existing integrations still work. Learn more at [Will the Cloud Partner Portal REST APIs be supported?](#are-the-cloud-partner-portal-rest-apis-still-supported) |
+| Payouts | Any purchases and deployments will continue to be paid out to you as normal. Learn more about [Getting paid in Partner Center](/partner-center/marketplace-get-paid?context=/azure/marketplace/context/context). |
+| API integrations with existing [Cloud Partner Portal APIs](cloud-partner-portal-api-overview.md) | Existing Cloud Partner Portal APIs are still supported and your existing integrations still work. Learn more at [Are the Cloud Partner Portal REST APIs still supported?](#are-the-cloud-partner-portal-rest-apis-still-supported) |
| Analytics | You can continue to monitor sales, evaluate performance, and optimize your offers in the commercial marketplace by viewing analytics in Partner Center. There are differences between how analytics reports display in CPP and Partner Center. For example, **Seller Insights** in CPP has an **Orders & Usage** tab that displays data for usage-based offers and non-usage-based offers, while in Partner Center the **Orders** page has a separate tab for SaaS Offers. Learn more at [Access analytic reports for the commercial marketplace in Partner Center](analytics.md). | |||
For the offer types supported in Partner Center, all offers were moved regardles
| SaaS | Yes | Sign in to Partner Center to create new offers and manage offers that were created in Cloud Partner Portal. Learn more at [Plan a SaaS offer for the commercial marketplace](plan-saas-offer.md). | | Virtual Machine | Yes | Sign in to Partner Center to create new offers and manage offers that were created in Cloud Partner Portal. Learn more at [Plan a virtual machine offer](marketplace-virtual-machines.md). | | Azure application | Yes | Sign in to Partner Center to create new offers and manage offers that were created in Cloud Partner Portal. Learn more at [Create an Azure application offer](azure-app-offer-setup.md). |
-| Dynamics 365 Business Central | Yes | Sign in to Partner Center to create new offers and manage offers that were created in Cloud Partner Portal. Learn more at [Create a Dynamics 365 Business Central offer](dynamics-365-business-central-offer-setup.md). |
+| Dynamics 365 Business Central | Yes | Sign in to Partner Center to create new offers and manage offers that were created in Cloud Partner Portal. Learn more at [Create a Dynamics 365 for Business Central offer](dynamics-365-business-central-offer-setup.md). |
| Dynamics 365 for Customer Engagement & PowerApps | Yes | Sign in to Partner Center to create new offers and manage offers that were created in Cloud Partner Portal. Learn more at [Create a Dynamics 365 for Customer Engagement & PowerApps offer](dynamics-365-customer-engage-offer-setup.md). | | Dynamics 365 for Operations | Yes | Sign in to Partner Center to create new offers and manage offers that were created in Cloud Partner Portal. Learn more at [Create a Dynamics 365 for Operations offer](./dynamics-365-operations-offer-setup.md). |
-| Power BI App | Yes | Sign in to Partner Center to create new offers and manage offers that were created in Cloud Partner Portal. Learn more at [Create a Power BI app for AppSource](./power-bi-app-offer-setup.md). |
-| IoT Edge module | Yes | Sign in to Partner Center to create new offers and manage offers that were created in Cloud Partner Portal. Learn more at [Create, configure, and publish an IoT Edge module offer in Azure Marketplace](iot-edge-offer-setup.md). |
+| Power BI App | Yes | Sign in to Partner Center to create new offers and manage offers that were created in Cloud Partner Portal. Learn more at [Create a Power BI app offer](./power-bi-app-offer-setup.md). |
+| IoT Edge module | Yes | Sign in to Partner Center to create new offers and manage offers that were created in Cloud Partner Portal. Learn more at [Create an IoT Edge module offer](iot-edge-offer-setup.md). |
| Container | Yes | Sign in to Partner Center to create new offers and manage offers that were created in Cloud Partner Portal. Learn more at [Create an Azure container offer](./azure-container-offer-setup.md). | | Consulting Service | Yes | Sign in to Partner Center to create new offers and manage offers that were created in Cloud Partner Portal. Learn more at [Create a consulting service offer](./create-consulting-service-offer.md). |
-| Managed Service | Yes | Sign in to Partner Center to create new offers and manage offers that were created in Cloud Partner Portal. Learn more at [Create a Managed Service offer](./plan-managed-service-offer.md). |
-| Dynamics Nav Managed Service | No | Microsoft has evolved Dynamics NAV Managed Service into [Dynamics 365 Business Central](/dynamics365/business-central/), so we de-listed Dynamics NAV Managed Service live offers from AppSource. These offers are no longer discoverable by customers and haven't been moved to Partner Center. To make your offers available in AppSource, adapt them to Dynamics 365 Business Central offers and submit them in [Partner Center](https://partner.microsoft.com/). Learn more at [Create a Dynamics 365 Business Central offer](dynamics-365-business-central-offer-setup.md). |
-| Cortana Intelligence | No | Microsoft has evolved the product road map for Cortana Intelligence, so we de-listed Cortana Intelligence live offers from AppSource. These offers are no longer discoverable by customers and haven't been moved to Partner Center. To make your offers available in the commercial marketplace, adapt your offers to Software as a Service (SaaS) offers and submit them in [Partner Center](https://partner.microsoft.com/). Learn more at [SaaS offer creation checklist in Partner Center](./plan-saas-offer.md). |
+| Managed Service | Yes | Sign in to Partner Center to create new offers and manage offers that were created in Cloud Partner Portal. Learn more at [Plan a Managed Service offer for the Microsoft commercial marketplace](./plan-managed-service-offer.md). |
+| Dynamics Nav Managed Service | No | Microsoft has evolved Dynamics NAV Managed Service into [Dynamics 365 Business Central](/dynamics365/business-central/), so we de-listed Dynamics NAV Managed Service live offers from AppSource. These offers are no longer discoverable by customers and haven't been moved to Partner Center. To make your offers available in AppSource, adapt them to Dynamics 365 Business Central offers and submit them in [Partner Center](https://partner.microsoft.com/). Learn more at [Create a Dynamics 365 for Business Central offer](dynamics-365-business-central-offer-setup.md). |
+| Cortana Intelligence | No | Microsoft has evolved the product road map for Cortana Intelligence, so we de-listed Cortana Intelligence live offers from AppSource. These offers are no longer discoverable by customers and haven't been moved to Partner Center. To make your offers available in the commercial marketplace, adapt your offers to Software as a Service (SaaS) offers and submit them in [Partner Center](https://partner.microsoft.com/). Learn more at [How to plan a SaaS offer for the commercial marketplace](./plan-saas-offer.md). |
## I can't find my Cloud Partner Portal offers in Partner Center
If you can't sign in to your account, you can open a [support ticket](https://go
## Where are instructions for using Partner Center?
-Go to the [commercial marketplace documentation.](index.yml), then expand **Commercial Marketplace Portal in Partner Center**. To see help articles for creating offers in Partner Center, expand **Create a new offer**.
+Go to the [commercial marketplace documentation](index.yml). To see help articles for creating offers in Partner Center, expand **Create and manage offers**.
## What are the publishing and offer management differences?
Here are some differences between the Cloud Partner Portal and Partner Center.
### Modular publishing capabilities
-Partner Center provides a modular publishing option that lets you select the changes you want to publish instead of always publishing all updates at once. For example, the following screen shows that the only changes selected to be published are the changes to **Properties** and **Offer Listing**. The changes you make in the Preview page will not be published.
+Partner Center provides a modular publishing option that lets you select the changes you want to publish instead of always publishing all updates at once. For example, the following screen shows that the only changes selected to be published are the changes to **Properties** and **Offer Listing**. The changes you make on the Preview page will not be published.
[![Screenshot shows the Partner Center Review and publish page.](media/cpp-pc-faq/review-page.png "Shows the Partner Center Review and publish page")](media/cpp-pc-faq/review-page.png#lightbox)
Partner Center includes a [compare feature](update-existing-offer.md#compare-cha
### Branding and navigation changes
-You'll notice some branding changes. For example, *SKUs* are branded as *Plans* in Partner Center:
+You'll notice some branding changes. For example, *SKUs* are branded as *Plans* in Partner Center.
[![Screenshot shows the Partner Center Plans page.](media/cpp-pc-faq/plans.png "Shows the Partner Center Plans page")](media/cpp-pc-faq/plans.png#lightbox)
marketplace Gtm How To Get Featured https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/marketplace/gtm-how-to-get-featured.md
You can take the following action items to improve your score:
Featured Apps promotions operate separately from the search algorithm. >[!Note]
->If your solution is not appearing correctly in search results, file a support ticket through the Help menu in [Partner Center](https://partner.microsoft.com/).
+>If your solution is not appearing correctly in search results, file a **[support ticket](https://go.microsoft.com/fwlink/?linkid=2165533)** in Partner Center.
Your GTM support also includes a full library of self-help templates, web content, training, and tools to help you further promote your listings and your business.
marketplace Gtm Marketing Best Practices https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/marketplace/gtm-marketing-best-practices.md
Previously updated : 04/16/2020 Last updated : 06/23/2021 # Marketing best practices
marketplace Integrated Solutions For Publishers https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/marketplace/integrated-solutions-for-publishers.md
description: Learn requirements and steps for publishing integrated solutions to
-- Previously updated : 04/16/2020++ Last updated : 06/22/2021 # Publish an integrated solution
After your nomination is approved, use the linked program resources to develop y
* APIs * Unified data repository
- Use these resources for the development process:
+2. Identify a cross-partner project manager to drive the project plan and timeline that you developed in the business and technical workshops.
- * [Business decision workshop discussion guide](https://aka.ms/AA5qicx)
- * [Technical decision workshop discussion guide](https://aka.ms/AA5qid1)
- * [Quickstart video: Integrated Solutions workshops](https://partner.microsoft.com/asset/detail/integrated-solutions-workshop-quickstart-guide-mp4)
+3. Develop the complete technical integration of the solution.
-1. Identify a cross-partner project manager to drive the project plan and timeline that you developed in the business and technical workshops.
+4. Decide the solution pricing and a single price point to surface on Microsoft AppSource or Azure Marketplace.
-1. Develop the complete technical integration of the solution.
-
-1. Decide the solution pricing and a single price point to surface on Microsoft AppSource or Azure Marketplace.
-
-1. Complete the marketing collateral for the Microsoft AppSource or Azure Marketplace listing, including:
+5. Complete the marketing collateral for the Microsoft AppSource or Azure Marketplace listing, including:
* A combined solution name. * A listing description of the integrated solution. Follow [offer-listing best practices](./gtm-offer-listing-best-practices.md).
After your nomination is approved, use the linked program resources to develop y
## Publish your integrated solution
-After you finish the technical integration and the marketing collateral, refer to the publisher guide for [Consulting services for Microsoft AppSource and Azure Marketplace](./plan-consulting-service-offer.md). Use this resource to determine whether your solution will be published in Microsoft AppSource or Azure Marketplace. Also use the guide to prepare your publishing artifacts and complete the publishing process.
+After you finish the technical integration and the marketing collateral, refer to the publisher guide for [Consulting services for Microsoft AppSource and Azure Marketplace](./plan-consulting-service-offer.md) to determine whether your solution will be published in Microsoft AppSource or Azure Marketplace. We recommend using the guide to prepare your publishing artifacts and complete the publishing process.
Although five service types are available for consulting-service offers, an integrated solution must be either a proof of concept or a full implementation.
When your solution is live in Microsoft AppSource or Azure Marketplace, you'll w
## Next steps -- [Integrated solutions nomination form](https://aka.ms/AA5qicu)
+- [Integrated solutions nomination form](https://aka.ms/AA5qicu)
marketplace License Dashboard https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/marketplace/license-dashboard.md
The _License Distribution_ widget shows the distribution of licenses across diff
:::image type="content" source="./media/license-dashboard/license-distribution.png" alt-text="Screenshot of the License Distribution widget on the License dashboard in Partner Center.":::
+## Data terms in License report downloads
+
+You can use the download icon in the upper-right corner of any widget to download the data.
+
+| Attribute name | Definition |
+| --- | --- |
+| Customer Country | Customer's billing country |
+| Customer Country Code | Customer's billing country code |
+| Customer Name | Customer name |
+| Activation Date | Date when licenses were activated |
+| Product Display Name | Offer title as shown in AppSource |
+| Product ID | Product ID |
+| Licenses Provisioned | Number of licenses activated into the customer's account |
+| Licenses Assigned | Number of licenses customer has assigned to their users |
+| SKU Name | Name of the plan in the offer |
+| Tenant ID | Unique ID of the tenant |
+| License State | License state |
+| Service ID | Unique identifier used in the package to map the plan with the license checks |
+|||
+ ## Next steps - For an overview of analytics reports available in the commercial marketplace, see [Access analytic reports for the commercial marketplace in Partner Center](analytics.md).
marketplace Purchase Software On Appsource https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/marketplace/purchase-software-on-appsource.md
# This file is currently not connected to TOC. We are working on a new location Sept 2020 Title: How to purchase software on Microsoft AppSource
-description: Steps for one to purchase software on AppSource | Azure Marketplace.
+ Title: Purchase software on Microsoft AppSource
+description: Steps for purchasing software on Microsoft AppSource (Azure Marketplace).
Previously updated : 03/20/2020 Last updated : 06/23/2021
-# How to Purchase Software on Microsoft AppSource
+# Purchase Software on Microsoft AppSource
Microsoft [AppSource](https://appsource.microsoft.com/) now enables customers to subscribe to SaaS applications that are offered by Microsoft partners. Customers can find certified web applications on the store and can manage the charges, upgrades, downgrades, and cancellations in a single place using Microsoft's Admin Center. This article describes how you can purchase an app from the store.
To purchase SaaS offers, you need:
1. Select **Place order**.
-## How to configure software post-purchase
+## Configure software post-purchase
After your order is received, it can take several seconds to get confirmed. You will receive a link to configure your SaaS subscription on the page, as well as an email confirming the purchase and the link to complete the configuration.
After your order is received, it can take several seconds to get confirmed. You
## Contact support
-One can [submit a support ticket](https://admin.microsoft.com/Adminportal/Home?source=applauncher#/homepage) through the Microsoft 365 admin center.
+For support, submit a [support ticket](https://admin.microsoft.com/Adminportal/Home?source=applauncher#/homepage) through the Microsoft 365 admin center.
-For business products, [contact help here](/office365/admin/contact-support-for-business-products?tabs=phone).
+For business products, [contact help](/office365/admin/contact-support-for-business-products?tabs=phone).
## Next steps
migrate How To Use Azure Migrate With Private Endpoints https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/migrate/how-to-use-azure-migrate-with-private-endpoints.md
To enable public network access for the Azure Migrate project, sign in to the Az
![Screenshot that shows how to change the network access mode.](./media/how-to-use-azure-migrate-with-private-endpoints/migration-project-properties.png)
-### Other considerations
+### Other considerations
**Considerations** | **Details** |
This section describes how to set up the Azure Migrate appliance. Then you'll us
- After the private endpoints are created, the DNS CNAME resource records for the Azure Migrate resources are updated to an alias in a subdomain with the prefix *privatelink*. By default, Azure Migrate also creates a private DNS zone corresponding to the *privatelink* subdomain for each resource type and inserts DNS A records for the associated private endpoints. This action enables the Azure Migrate appliance and other software components that reside in the source network to reach the Azure Migrate resource endpoints on private IP addresses. - Azure Migrate also enables a [managed identity](../active-directory/managed-identities-azure-resources/overview.md) for the migrate project and grants permissions to the managed identity to securely access the storage account.
-1. After the key is successfully generated, copy the key details to configure and register the appliance.
+1. After the key is successfully generated, copy the key details to configure and register the appliance.
#### Download the appliance installer file
Azure Migrate: Discovery and assessment use a lightweight Azure Migrate applianc
To set up the appliance: 1. Download the zipped file that contains the installer script from the portal.
- 1. Copy the zipped file on the server that will host the appliance.
+ 1. Copy the zipped file to the server that will host the appliance.
1. After you download the zipped file, verify the file security. 1. Run the installer script to deploy the appliance.
Open a browser on any machine that can connect to the appliance server. Open the
#### Set up prerequisites
-1. Read the third-party information, and accept the **license terms**.
+1. Read the third-party information, and accept the **license terms**.
1. In the configuration manager under **Set up prerequisites**, do the following: - **Connectivity**: The appliance checks for access to the required URLs. If the server uses a proxy:
After the prerequisites check has completed, follow the steps to register the ap
### Assess your servers for migration to Azure After the discovery is complete, assess your servers, such as [VMware VMs](./tutorial-assess-vmware-azure-vm.md), [Hyper-V VMs](./tutorial-assess-hyper-v.md), [physical servers](./tutorial-assess-vmware-azure-vm.md), [AWS VMs](./tutorial-assess-aws.md), and [GCP VMs](./tutorial-assess-gcp.md), for migration to Azure VMs or Azure VMware Solution by using the Azure Migrate: Discovery and assessment tool.
-You can also [assess your on-premises machines](./tutorial-discover-import.md#prepare-the-csv) with the Azure Migrate: Discovery and assessment tool by using an imported CSV file.
+You can also [assess your on-premises machines](./tutorial-discover-import.md#prepare-the-csv) with the Azure Migrate: Discovery and assessment tool by using an imported CSV file.
## Migrate servers to Azure by using Private Link
After you set up the replication appliance, follow these steps to create the req
1. In **Discover machines** > **Are your machines virtualized?**, select **Not virtualized/Other**. 1. In **Target region**, select and confirm the Azure region to which you want to migrate the machines.
-1. Select **Create resources** to create the required Azure resources. Don't close the page during the creation of resources.
+1. Select **Create resources** to create the required Azure resources. Don't close the page during the creation of resources.
- This step creates a Recovery Services vault in the background and enables a managed identity for the vault. A Recovery Services vault is an entity that contains the replication information of servers and is used to trigger replication operations.
- - If the Azure Migrate project has private endpoint connectivity, a private endpoint is created for the Recovery Services vault. This step adds five fully qualified domain names (FQDNs) to the private endpoint, one for each microservice linked to the Recovery Services vault.
+ - If the Azure Migrate project has private endpoint connectivity, a private endpoint is created for the Recovery Services vault. This step adds five fully qualified domain names (FQDNs) to the private endpoint, one for each microservice linked to the Recovery Services vault.
- The five domain names are formatted in this pattern: <br/> _{Vault-ID}-asr-pod01-{type}-.{target-geo-code}_.privatelink.siterecovery.windowsazure.com
- - By default, Azure Migrate automatically creates a private DNS zone and adds DNS A records for the Recovery Services vault microservices. The private DNS zone links to the private endpoint virtual network and allows the on-premises replication appliance to resolve the FQDNs to their private IP addresses.
+ - By default, Azure Migrate automatically creates a private DNS zone and adds DNS A records for the Recovery Services vault microservices. The private DNS zone is then linked to the private endpoint virtual network.
-1. Before you register the replication appliance, ensure that the vault's private link FQDNs are reachable from the machine that hosts the replication appliance. Learn more about [how to verify network connectivity](./troubleshoot-network-connectivity.md).
+1. Before you register the replication appliance, ensure that the vault's private link FQDNs are reachable from the machine that hosts the replication appliance. Additional DNS configuration may be required for the on-premises replication appliance to resolve the private link FQDNs to their private IP addresses. Learn more about [how to verify network connectivity](./troubleshoot-network-connectivity.md#verify-dns-resolution).
1. After you verify the connectivity, download the appliance setup and key file, run the installation process, and register the appliance to Azure Migrate. Learn more about how to [set up the replication appliance](./tutorial-migrate-physical-virtual-machines.md#set-up-the-replication-appliance). After you set up the replication appliance, follow these instructions to [install the mobility service](./tutorial-migrate-physical-virtual-machines.md#install-the-mobility-service) on the machines you want to migrate.
Follow [these steps](./tutorial-migrate-physical-virtual-machines.md#replicate-m
In **Replicate** > **Target settings** > **Cache/Replication storage account**, use the dropdown list to select a storage account to replicate over a private link.
-If your Azure Migrate project has private endpoint connectivity, you must [grant permissions to the Recovery Services vault managed identity](#grant-access-permissions-to-the-recovery-services-vault) to access the storage account required by Azure Migrate.
+If your Azure Migrate project has private endpoint connectivity, you must [grant permissions to the Recovery Services vault managed identity](#grant-access-permissions-to-the-recovery-services-vault) to access the storage account required by Azure Migrate.
To enable replications over a private link, [create a private endpoint for the storage account](#create-a-private-endpoint-for-the-storage-account-optional).
migrate Migrate Support Matrix Vmware Migration https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/migrate/migrate-support-matrix-vmware-migration.md
You can migrate VMware VMs in a couple of ways:
Review [this article](server-migrate-overview.md) to figure out which method you want to use.
-## Agentless migration
+## Agentless migration
This section summarizes requirements for agentless VMware VM migration to Azure.
The table summarizes VMware hypervisor requirements.
-### VM requirements (agentless)
+### VM requirements (agentless)
The table summarizes agentless migration requirements for VMware VMs. **Support** | **Details** | **Supported operating systems** | You can migrate [Windows](https://support.microsoft.com/help/2721672/microsoft-server-software-support-for-microsoft-azure-virtual-machines) and [Linux](../virtual-machines/linux/endorsed-distros.md) operating systems that are supported by Azure.
-**Windows VMs in Azure** | You might need to [make some changes](prepare-for-migration.md#verify-required-changes-before-migrating) on VMs before migration.
+**Windows VMs in Azure** | You might need to [make some changes](prepare-for-migration.md#verify-required-changes-before-migrating) on VMs before migration.
**Linux VMs in Azure** | Some VMs might require changes so that they can run in Azure.<br/><br/> For Linux, Azure Migrate makes the changes automatically for these operating systems:<br/> - Red Hat Enterprise Linux 8, 7.9, 7.8, 7.7, 7.6, 7.5, 7.4, 7.0, 6.x <br/> - Cent OS 8, 7.7, 7.6, 7.5, 7.4, 6.x</br> - SUSE Linux Enterprise Server 11, 12, 15 SP0, 15 SP1 <br/>- Ubuntu 19.04, 19.10, 14.04LTS, 16.04LTS, 18.04LTS<br/> - Debian 7, 8, 9 <br/> Oracle Linux 6, 7.7, 7.7-CI<br/> For other operating systems you make the [required changes](prepare-for-migration.md#verify-required-changes-before-migrating) manually.
-**Linux boot** | If /boot is on a dedicated partition, it should reside on the OS disk, and not be spread across multiple disks.<br/> If /boot is part of the root (/) partition, then the '/' partition should be on the OS disk, and not span other disks.
-**UEFI boot** | Supported. UEFI-based VMs will be migrated to Azure generation 2 VMs.
+**Boot requirements** | If /boot is on a dedicated partition, it should reside on the OS disk, and not be spread across multiple disks.<br/> If /boot is part of the root (/) partition, then the '/' partition should be on the OS disk, and not span other disks.
+**UEFI boot** | Supported. UEFI-based VMs will be migrated to Azure generation 2 VMs.
**Disk size** | up to 2 TB OS disk for gen 1 VM and gen 2 VMs; 32 TB for data disks. **Disk limits** | Up to 60 disks per VM. **Encrypted disks/volumes** | VMs with encrypted disks/volumes aren't supported for migration.
The table summarizes agentless migration requirements for VMware VMs.
**Teamed NICs** | Not supported. **IPv6** | Not supported. **Target disk** | VMs can only be migrated to managed disks (standard HDD, standard SSD, premium SSD) in Azure.
-**Simultaneous replication** | Up to 300 simultaneously replicating VMs per vCenter Server with 1 appliance. Up to 500 simultaneously replicating VMs per vCenter Server when an additional [scale-out appliance](./how-to-scale-out-for-migration.md) is deployed.
-**Automatic installation of Azure VM agent (Windows and Linux Agent)** | Supported for Windows Server 2008 R2 onwards. <br/> Supported for RHEL6, RHEL7, CentOS7, Ubuntu 14.04, Ubuntu 16.04, Ubuntu18.04.
+**Simultaneous replication** | Up to 300 simultaneously replicating VMs per vCenter Server with 1 appliance. Up to 500 simultaneously replicating VMs per vCenter Server when an additional [scale-out appliance](./how-to-scale-out-for-migration.md) is deployed.
+**Automatic installation of Azure VM agent (Windows and Linux Agent)** | Supported for Windows Server 2008 R2 onwards. <br/> Supported for RHEL6, RHEL7, CentOS7, Ubuntu 14.04, Ubuntu 16.04, Ubuntu18.04.
> [!Note] > In addition to internet connectivity, for Linux VMs, ensure that the following packages are installed for successful installation of the Microsoft Azure Linux agent (waagent):
The table summarizes agentless migration requirements for VMware VMs.
>- Filesystem utilities: sfdisk, fdisk, mkfs, parted >- Password tools: chpasswd, sudo >- Text processing tools: sed, grep
->- Network tools: ip-route
+>- Network tools: ip-route
>- Enable rc.local service on the source VM > [!TIP]
Agentless migration uses the [Azure Migrate appliance](migrate-appliance.md). Yo
**Device** | **Connection** | Appliance | Outbound connections on port 443 to upload replicated data to Azure, and to communicate with Azure Migrate services orchestrating replication and migration.
-vCenter server | Inbound connections on port 443 to allow the appliance to orchestrate replication - create snapshots, copy data, release snapshots
-vSphere/ESXI host | Inbound on TCP port 902 for the appliance to replicate data from snapshots.
+vCenter server | Inbound connections on port 443 to allow the appliance to orchestrate replication - create snapshots, copy data, release snapshots.
+vSphere/ESXi host | Inbound on TCP port 902 for the appliance to replicate data from snapshots. Outbound port 902 from the ESXi host.
-## Agent-based migration
+## Agent-based migration
This section summarizes requirements for agent-based migration.
The table summarizes VMware VM support for VMware VMs you want to migrate using
**Network/Storage** | For the latest information, review the [network](../site-recovery/vmware-physical-azure-support-matrix.md#network) and [storage](../site-recovery/vmware-physical-azure-support-matrix.md#storage) prerequisites for Site Recovery. Azure Migrate provides identical network/storage requirements. **Azure requirements** | For the latest information, review the [Azure network](../site-recovery/vmware-physical-azure-support-matrix.md#azure-vm-network-after-failover), [storage](../site-recovery/vmware-physical-azure-support-matrix.md#azure-storage), and [compute](../site-recovery/vmware-physical-azure-support-matrix.md#azure-compute) requirements for Site Recovery. Azure Migrate has identical requirements for VMware migration. **Mobility service** | The Mobility service agent must be installed on each VM you want to migrate.
-**UEFI boot** | Supported. UEFI-based VMs will be migrated to Azure generation 2 VMs.
+**UEFI boot** | Supported. UEFI-based VMs will be migrated to Azure generation 2 VMs.
**UEFI - Secure boot** | Not supported for migration. **Target disk** | VMs can only be migrated to managed disks (standard HDD, standard SSD, premium SSD) in Azure. **Disk size** | up to 2 TB OS disk for gen 1 VM; up to 4 TB OS disk for gen 2 VM; 32 TB for data disks.
Process server | The process server receives replication data, optimizes, and en
## Azure VM requirements
-All on-premises VMs replicated to Azure (with agentless or agent-based migration) must meet the Azure VM requirements summarized in this table.
+All on-premises VMs replicated to Azure (with agentless or agent-based migration) must meet the Azure VM requirements summarized in this table.
-**Component** | **Requirements**
+**Component** | **Requirements**
| |
-Guest operating system | Verifies supported VMware VM operating systems for migration.<br/> You can migrate any workload running on a supported operating system.
-Guest operating system architecture | 64-bit.
-Operating system disk size | Up to 2,048 GB.
-Operating system disk count | 1
-Data disk count | 64 or less.
+Guest operating system | Verifies supported VMware VM operating systems for migration.<br/> You can migrate any workload running on a supported operating system.
+Guest operating system architecture | 64-bit.
+Operating system disk size | Up to 2,048 GB.
+Operating system disk count | 1
+Data disk count | 64 or less.
Data disk size | Up to 32 TB Network adapters | Multiple adapters are supported.
-Shared VHD | Not supported.
-FC disk | Not supported.
+Shared VHD | Not supported.
+FC disk | Not supported.
BitLocker | Not supported.<br/><br/> BitLocker must be disabled before you migrate the machine.
-VM name | From 1 to 63 characters.<br/><br/> Restricted to letters, numbers, and hyphens.<br/><br/> The machine name must start and end with a letter or number.
+VM name | From 1 to 63 characters.<br/><br/> Restricted to letters, numbers, and hyphens.<br/><br/> The machine name must start and end with a letter or number.
Connect after migration-Windows | To connect to Azure VMs running Windows after migration:<br/><br/> - Before migration, enable RDP on the on-premises VM.<br/><br/> Make sure that TCP, and UDP rules are added for the **Public** profile, and that RDP is allowed in **Windows Firewall** > **Allowed Apps**, for all profiles.<br/><br/> For site-to-site VPN access, enable RDP and allow RDP in **Windows Firewall** -> **Allowed apps and features** for **Domain and Private** networks.<br/><br/> In addition, check that the operating system's SAN policy is set to **OnlineAll**. [Learn more](prepare-for-migration.md). Connect after migration-Linux | To connect to Azure VMs after migration using SSH:<br/><br/> Before migration, on the on-premises machine, check that the Secure Shell service is set to Start, and that firewall rules allow an SSH connection.<br/><br/> After failover, on the Azure VM, allow incoming connections to the SSH port for the network security group rules on the failed over VM, and for the Azure subnet to which it's connected.<br/><br/> In addition, add a public IP address for the VM. ## Next steps
-[Select](server-migrate-overview.md) a VMware migration option.
+[Select](server-migrate-overview.md) a VMware migration option.
migrate Troubleshoot Network Connectivity https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/migrate/troubleshoot-network-connectivity.md
You can verify the DNS resolution for other Azure Migrate artifacts using a simi
If the DNS resolution is incorrect, follow these steps:
+**Recommended** for testing: You can manually update your source environment DNS records by editing the DNS hosts file on your on-premises appliance with the private link resource FQDNs and their associated private IP addresses.
- If you use a custom DNS, review your custom DNS settings, and validate that the DNS configuration is correct. For guidance, see [private endpoint overview: DNS configuration](../private-link/private-endpoint-overview.md#dns-configuration). - If you use Azure-provided DNS servers, refer to the following section for further troubleshooting.
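Whichever DNS option applies, it can help to confirm what a private link FQDN actually resolves to from the on-premises appliance before and after changing DNS or hosts-file entries. A minimal sketch (the FQDN below is a hypothetical placeholder):

```python
# Minimal sketch: check what a private link FQDN resolves to from this machine.
# The FQDN is a hypothetical placeholder; use the FQDNs listed on your private
# endpoint's DNS configuration page.
import ipaddress
import socket

fqdn = "12345678-abcd-efgh.privatelink.siterecovery.windowsazure.com"  # placeholder

try:
    resolved = socket.gethostbyname(fqdn)
except socket.gaierror as err:
    raise SystemExit(f"{fqdn} did not resolve: {err}")

print(f"{fqdn} -> {resolved}")
if ipaddress.ip_address(resolved).is_private:
    print("Resolves to a private IP address; private link DNS appears to be in effect.")
else:
    print("Resolves to a public IP address; review your DNS or hosts-file configuration.")
```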
migrate Whats New https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/migrate/whats-new.md
- Support to provide multiple server credentials on Azure Migrate appliance to discover installed applications (software inventory), agentless dependency analysis and discover SQL Server instances and databases in your VMware environment. [Learn more](tutorial-discover-vmware.md#provide-server-credentials) - Discovery and assessment of SQL Server instances and databases running in your VMware environment is now in preview. [Learn More](concepts-azure-sql-assessment-calculation.md) Refer to the [Discovery](tutorial-discover-vmware.md) and [assessment](tutorial-assess-sql.md) tutorials to get started. - Agentless VMware migration now supports concurrent replication of 500 VMs per vCenter.
+- Azure Migrate: App Containerization tool now lets you package applications running on servers into a container image and deploy the containerized application to Azure Kubernetes Service.
+For more information, see the [ASP.NET app containerization and migration to Azure Kubernetes Service](tutorial-app-containerization-aspnet-kubernetes.md) and [Java web app containerization and migration to Azure Kubernetes Service](tutorial-app-containerization-java-kubernetes.md) tutorials to get started.
+ ## Update (January 2021) - Azure Migrate: Server Migration tool now lets you migrate VMware virtual machines, physical servers, and virtual machines from other clouds to Azure virtual machines with disks encrypted with server-side encryption with customer-managed keys (CMK).
mysql App Development Best Practices https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/mysql/app-development-best-practices.md
Last updated 08/11/2020
-# Best practices for building an application with Azure Database for MySQL
+# Best practices for building an application with Azure Database for MySQL
-Here are some best practices to help you build a cloud-ready application by using Azure Database for MySQL. These best practices can reduce development time for your app.
+Here are some best practices to help you build a cloud-ready application by using Azure Database for MySQL. These best practices can reduce development time for your app.
## Configuration of application and database resources ### Keep the application and database in the same region
-Make sure all your dependencies are in the same region when deploying your application in Azure. Spreading instances across regions or availability zones creates network latency, which might affect the overall performance of your application.
+Make sure all your dependencies are in the same region when deploying your application in Azure. Spreading instances across regions or availability zones creates network latency, which might affect the overall performance of your application.
### Keep your MySQL server secure
-Configure your MySQL server to be [secure](./concepts-security.md) and not accessible publicly. Use one of these options to secure your server:
+Configure your MySQL server to be [secure](./concepts-security.md) and not accessible publicly. Use one of these options to secure your server:
- [Firewall rules](./concepts-firewall-rules.md)-- [Virtual networks](./concepts-data-access-and-security-vnet.md)
+- [Virtual networks](./concepts-data-access-and-security-vnet.md)
- [Azure Private Link](./concepts-data-access-security-private-link.md)
-For security, you must always connect to your MySQL server over SSL and configure your MySQL server and your application to use TLS 1.2. See [How to configure SSL/TLS](./concepts-ssl-connection-security.md).
+For security, you must always connect to your MySQL server over SSL and configure your MySQL server and your application to use TLS 1.2. See [How to configure SSL/TLS](./concepts-ssl-connection-security.md).
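As a minimal, hedged sketch of such a connection using `mysql-connector-python` (server name, user, database, and certificate path are placeholders, and `tls_versions` requires a recent connector version):

```python
# Sketch: connect to Azure Database for MySQL over TLS 1.2 with certificate
# verification. All connection values below are placeholders; download the CA
# certificate as described in the SSL/TLS article linked above.
import mysql.connector

conn = mysql.connector.connect(
    host="mydemoserver.mysql.database.azure.com",   # placeholder server name
    user="myadmin@mydemoserver",                    # placeholder admin user
    password="<your-password>",
    database="mydb",                                # placeholder database
    ssl_ca="/path/to/ca-certificate.pem",           # CA certificate file path
    ssl_verify_cert=True,                           # verify the server certificate
    tls_versions=["TLSv1.2"],                       # enforce TLS 1.2 from the client
)
print(conn.is_connected())
conn.close()
```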
+
+### Use advanced networking with AKS
+When accelerated networking is enabled on a VM, there is lower latency, reduced jitter, and decreased CPU utilization on the VM. To learn more, see [Best practices for Azure Kubernetes Service and Azure Database for MySQL](concepts-aks.md).
### Tune your server parameters
For read-heavy workloads, tuning the server parameters `tmp_table_size` and `max_heap_table_size` can help optimize for better performance. To calculate the values required for these variables, look at the total per-connection memory values and the base memory. The sum of per-connection memory parameters, excluding `tmp_table_size`, combined with the base memory accounts for the total memory of the server.
To calculate the largest possible size of `tmp_table_size` and `max_heap_table_size`, work backward from the server's total memory: subtract the base memory and the per-connection allocations, and what remains is the budget available for in-memory temporary tables (a rough worked example follows the note below).
> > Base memory indicates the memory variables, like `query_cache_size` and `innodb_buffer_pool_size`, that MySQL will initialize and allocate at server start. Per-connection memory, like `sort_buffer_size` and `join_buffer_size`, is memory that's allocated only when a query needs it.
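The following is a rough, illustrative calculation only; every number is a hypothetical placeholder rather than a recommendation, and the real base and per-connection values come from your own server parameters:

```python
# Rough, illustrative sizing exercise for tmp_table_size / max_heap_table_size.
# All values below are hypothetical placeholders, not recommendations.
GB = 1024 ** 3
MB = 1024 ** 2

total_server_memory = 16 * GB          # memory of the chosen pricing tier
base_memory = 8 * GB                   # e.g. innodb_buffer_pool_size + query_cache_size
per_connection_memory = 4 * MB         # e.g. sort_buffer_size + join_buffer_size, per connection
expected_connections = 200

# What remains after base memory and per-connection buffers is the budget
# available for in-memory temporary tables across all connections.
temp_table_budget = (
    total_server_memory
    - base_memory
    - per_connection_memory * expected_connections
)
print(f"Budget for in-memory temp tables: {temp_table_budget / GB:.2f} GB")
```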
-### Create non-admin users
+### Create non-admin users
[Create non-admin users](./howto-create-users.md) for each database. Typically, the user names are identified as the database names. ### Reset your password
-You can [reset your password](./howto-create-manage-server-portal.md#update-admin-password) for your MySQL server by using the Azure portal.
+You can [reset your password](./howto-create-manage-server-portal.md#update-admin-password) for your MySQL server by using the Azure portal.
Resetting your server password for a production database can bring down your application. It's a good practice to reset the password for any production workloads at off-peak hours to minimize the impact on your application's users.
-## Performance and resiliency
+## Performance and resiliency
Here are a few tools and practices that you can use to help debug performance issues with your application. ### Enable slow query logs to identify performance issues
-You can enable [slow query logs](./concepts-server-logs.md) and [audit logs](./concepts-audit-logs.md) on your server. Analyzing slow query logs can help identify performance bottlenecks for troubleshooting.
+You can enable [slow query logs](./concepts-server-logs.md) and [audit logs](./concepts-audit-logs.md) on your server. Analyzing slow query logs can help identify performance bottlenecks for troubleshooting.
Audit logs are also available through Azure Diagnostics logs in Azure Monitor logs, Azure Event Hubs, and storage accounts. See [How to troubleshoot query performance issues](./howto-troubleshoot-query-performance.md). ### Use connection pooling
-Managing database connections can have a significant impact on the performance of the application as a whole. To optimize performance, you must reduce the number of times that connections are established and the time for establishing connections in key code paths. Use [connection pooling](./concepts-connectivity.md#access-databases-by-using-connection-pooling-recommended) to connect to Azure Database for MySQL to improve resiliency and performance.
+Managing database connections can have a significant impact on the performance of the application as a whole. To optimize performance, you must reduce the number of times that co